Dataset schema:

| Column | Type | Lengths / classes |
| --- | --- | --- |
| paper_id | string | 19–21 |
| paper_title | string | 8–170 |
| paper_abstract | string | 8–5.01k |
| paper_acceptance | string (categorical) | 18 values |
| meta_review | string | 29–10k |
| label | string (categorical) | 3 values |
| review_ids | sequence | - |
| review_writers | sequence | - |
| review_contents | sequence | - |
| review_ratings | sequence | - |
| review_confidences | sequence | - |
| review_reply_tos | sequence | - |
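The fields above can be loaded and inspected with the Hugging Face `datasets` library. The sketch below is only a minimal example: the repository id `"org/peer-review-dataset"` is a placeholder and the `"train"` split name is an assumption, so substitute the actual path and split. The two example records reproduced below show the raw field contents; there, `-1` in `review_ratings` and `review_confidences` marks author replies rather than reviews.

```python
from datasets import load_dataset

# Placeholder repository id and assumed split name -- replace with the actual ones.
ds = load_dataset("org/peer-review-dataset", split="train")

record = ds[0]  # one paper with its reviews, ratings, and meta-review
print(record["paper_id"], record["paper_title"])
print(record["paper_acceptance"])   # one of 18 decision strings
print(record["label"])              # one of 3 classes

# The review_* columns are parallel sequences, one entry per note on the paper.
for writer, rating, conf in zip(
    record["review_writers"], record["review_ratings"], record["review_confidences"]
):
    # Ratings/confidences of -1 mark author replies rather than reviews.
    print(writer, rating, conf)
```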
nips_2022_IsHRUzXPqhI
SHINE: SubHypergraph Inductive Neural nEtwork
Hypergraph neural networks can model multi-way connections among nodes of the graphs, which are common in real-world applications such as genetic medicine. In particular, genetic pathways or gene sets encode molecular functions driven by multiple genes, naturally represented as hyperedges. Thus, hypergraph-guided embedding can capture functional relations in learned representations. Existing hypergraph neural network models often focus on node-level or graph-level inference. There is an unmet need in learning powerful representations of subgraphs of hypergraphs in real-world applications. For example, a cancer patient can be viewed as a subgraph of genes harboring mutations in the patient, while all the genes are connected by hyperedges that correspond to pathways representing specific molecular functions. For accurate inductive subgraph prediction, we propose SubHypergraph Inductive Neural nEtwork (SHINE). SHINE uses informative genetic pathways that encode molecular functions as hyperedges to connect genes as nodes. SHINE jointly optimizes the objectives of end-to-end subgraph classification and hypergraph nodes' similarity regularization. SHINE simultaneously learns representations for both genes and pathways using strongly dual attention message passing. The learned representations are aggregated via a subgraph attention layer and used to train a multilayer perceptron for subgraph inferencing. We evaluated SHINE against a wide array of state-of-the-art (hyper)graph neural networks, XGBoost, NMF and polygenic risk score models, using large scale NGS and curated datasets. SHINE outperformed all comparison models significantly, and yielded interpretable disease models with functional insights.
Accept
The paper proposed a GNN that explicitly treats hyperedges, and makes use of strongly dual attention, hypergraph regularization, and weighted subgraph attention. The proposed method shows better performance than existing baselines on two genetic medicine datasets. Explainability is also demonstrated. Reviewers originally raised many concerns on presentation (too specialized for the target application), lack of ablation (effectiveness of each proposed component is not clearly shown), novelty (combination of small modifications of existing methods), and explainability (existing methods can do the same). The authors made an amazing job to address most of the concerns: They reported additional ablation results and baseline results, and showed that the proposed method still performs better, and each proposed component plays a significant role. Two reviewers have been convinced by the author's response, while the other two have not, insisting that the novelty issue remains, and with the limited novelty, more careful investigation is required for publication. This is a borderline paper, and I recommend acceptance because I think adjusting existing methods to target applications is important research even if the modifications are small. The proposed method significantly outperforms existing baselines (including the ones reviewers suggested), and the additional ablation study shows each of the proposed components is effective. On the other hand, I also sympathize with the reviewers with negative evaluations on the following comments: "formalising the key differences with existing similar methods (e.g., HyperGAT in lines 159-169) and confirming the differences with convincing (synthetic/real-world) experiments, e.g., on a dataset chosen cleverly to show clear failure of HyperGAT but success of SHINE, would improve the paper's quality." "The paper can be strengthened by positioning strongly dual attention in SHINE with different attention mechanisms in heterogeneous graph neural network literature (some are listed below): Heterogeneous Graph Attention Network, In WWW'19 HetGNN: Heterogeneous Graph Neural Network, In KDD'19 Metapath enhanced graph attention encoder for HINs representation learning, In BigData'19. MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding, In WWW'20 Heterogeneous Graph Transformer, In WWW'20. There is no need to empirically compare and run them as baselines but explaining the key differences conceptually to make hypergraphs a more compelling choice for genetic medicine than heterogeneous graphs can strengthen the paper." I hope the authors would make a bit more effort to incorporate these suggestions in the final version.
train
[ "3Xa1WvXErgo", "EbEeD7kMR5", "RllXdyULtc", "8WQH8TtaC6j", "yam4CVNAqux", "AlqJ2FDqqYo1", "6QnXfIv7CDb", "ZREzLf4f6Ke", "GUgS_KB-X7I", "soQcEa9QjBx", "hR-X65HwHae", "xrsJ4_Q5L2s", "cIwGbi4eZg", "-Ms8W_5J3x", "5YZI5BfHtM", "uZUAcvyQtOf" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer FtLi,\n\nThank you for your constructive feedbacks and suggestions again! We greatly appreciate the additional rigorous ablation studies and state-of-the-art baselines (e.g., AllSetTransformer and AllDeepSets) that you suggested, and our new results have further strengthened the paper. We also highly value your comprehensive suggestions such as on hyperparameter sensitivity and computational complexity, addressing which have further increased the technical value of our paper. We hope that our responses have addressed your concerns and questions appropriately, and that you will consider increasing your score. \n\nBest Regards,\n\nPaper1592 Authors", " Dear Reviewer noVi,\n\nThank you again for your constructive comments and suggestions! They help articulate our contribution, lead to more balanced discussions (e.g., interpretation and societal impact), and result in more rigorous ablation studies that have further strengthened our paper. We hope that our responses have properly addressed your concerns and that you will consider increasing your score. \n\nBest Regards,\n\nPaper1592 Authors", " Dear Reviewer tNZm,\n\nThank you again for your constructive comments and suggestions! We greatly appreciate your suggested baselines/ablations, and our new results have further strengthened the paper. We hope that our responses have properly addressed your questions and concerns and that you will consider increasing your rating. \n\nBest Regards,\n\nPaper1592 Authors", " Dear Reviewer CYi2,\n\nThank you for your valuable comments and questions again! They have helped both clarify the technical aspects and improve the writing of our paper. We hope that our responses have properly addressed your questions and concerns and that you will consider increasing your rating. \n\nBest Regards,\n\nPaper1592 Authors", " ### Navigation summary of new results\nFor easier navigation, we provide clickable links in this followup for our **five additional state-of-the-art baselines and ablation studies** per Reviewers' suggestions as follows, including SubGNN bipartite (Reviewers [tNZm](https://openreview.net/forum?id=IsHRUzXPqhI&noteId=soQcEa9QjBx) #2, [noVi](https://openreview.net/forum?id=IsHRUzXPqhI&noteId=xrsJ4_Q5L2s) #1, [FtLi](https://openreview.net/forum?id=IsHRUzXPqhI&noteId=ZREzLf4f6Ke) #1), weighted subgraph attention (WSA) ablation (Reviewers [tNZm](https://openreview.net/forum?id=IsHRUzXPqhI&noteId=soQcEa9QjBx) #3, [noVi](https://openreview.net/forum?id=IsHRUzXPqhI&noteId=xrsJ4_Q5L2s) #2), MLP replacing dual attention ablation (Reviewer [FtLi](https://openreview.net/forum?id=IsHRUzXPqhI&noteId=ZREzLf4f6Ke) #2), AllSetTransformer (Reviewer [FtLi](https://openreview.net/forum?id=IsHRUzXPqhI&noteId=ZREzLf4f6Ke) #3), AllDeepSets (Reviewer [FtLi](https://openreview.net/forum?id=IsHRUzXPqhI&noteId=ZREzLf4f6Ke) #3). These newly added results and accompanying analysis collectively have further strengthened our paper. Thank you for your constructive suggestions!\n", " ### 5. Computational complexity, training times, algorithm/pseudo code\nThank you for the suggestion. The complexity of SHINE scales as the following factors grow: the numbers of layers and nodes, the number and size of hyperedges, the size of hidden dimensions, and finally the number and size of subhypergraphs. \nWe will include the training times in the supplement. 
For a high level ballpark, for example on V100 GPU on the TCGA-MC3 dataset: MLP ~5min, HyperGCN ~7min, AllSetTransformer ~20min, AllDeepSet ~20min, SHINE ~30min, HGNN ~30min, HyperGAT ~30min, SubGNN >1day (excluding prebuild time).\n\n", " ### 4. Potential negative societal impact\nThank you for suggesting the discussions on potential negative societal impact and we will add the following paragraph to Discussion.\n“The techniques and results presented in the paper could apply to many diseases through informing genetic medicine practice. In these real-world applications, subject’s genetic profile may contain individual characterizing information. Thus, this work, or derivatives of it, should never be used in violation of individual’s privacy. For using individual level dataset such as the TCGA-MC3, the proper steps of IRB review of study and execution of data user agreement need to be properly completed prior to the study, such as done by this study.”\n", " Thank you for the detailed review and constructive suggestions!\n### 1. Compare with SubGNN bipartite \nThank you for suggesting the baseline and important reference “A Survey on Heterogeneous Graph Embedding: Methods, Techniques, Applications and Sources”. We will properly cite the reference. We have conducted the suggested experiment, and will add the following results. **New Results** SubGNN bipartite has held-out test set micro-F1 of 0.6137 $\\pm$ 0.0097 on the DisGeNet dataset, having non-overlapping standard deviation intervals, in fact wide separation, with the results from SHINE. The experiment on the TCGA-MC3 dataset has taken over a couple of days and has not finished yet, but from what has come out, we are confident that the comparison will be similar as the case of the DisGeNet dataset. We will update the results to the revision when the full SubGNN experiment on the TCGA-MC3 dataset has finished. \n### 2. MLP replacing dual attention ablation\nThank you for suggesting the important ablation. We have conducted the suggested study and will add the following results. **New Results** MLP replacing dual attention message passing has obtained held-out test set micro-F1 of 0.6331 $\\pm$ 0.0056 on DisGeNet dataset and 0.4249 $\\pm$ 0.0165 on TCGA-MC3 dataset, both having non-overlapping standard deviation intervals with those from SHINE, separated by a large margin. We add the following to Discussion. “This has suggested that although MLP has been frequently used to approximate a target function, in the setting of large hypergraph (e.g., both hypergraphs have numerous thousand-nodes hyperedges), it can still be quite challenging to approximate an ideal target function and explicit dual attention formulation wins out.”\n### 3. Differences with HyperGAT, and set-based methods AllSet\nGreat point and suggestions! Key differences with HyperGAT: strongly dual attention (see next paragraph) and hypergraph regularization. HyperGAT has no hypergraph regularization that nodes with similar context of pathways (similar molecular functions) should have similar representations. So HyperGAT has no built-in measures to prevent or discourage genes belonging to the same functional class (e.g., promoting immune reactions) from having drastically different representations (e.g., opposite directions), a phenomenon that will pose interpretation difficulty.\n\nWe will properly cite the seminal work AllSet, which subsumes HyperGAT as a special case. 
Strongly dual attention explores the hypergraph propagation from a different angle than both HyperGAT and AllSet. Compared to HyperGAT, strongly dual attention has the calculation of hyperedge’s and node’s attentions share the same underlying dual-attention matrix as shown in Fig. 1 (b). Such parameter sharing is meant to preserve the interchangeable nature of hypergraph’s nodes and hyperedges, and guarantee that (H*)* = H. When analogously applied to AllSet, strongly dual attention means to explore attention context sharing between fV->E and fE->V and will be a direction for future work. \n\n**New Results** We followed your suggestions of AllSetTransformer and AllDeepSets as baselines, and will add the following results and accompanying discussions. Their held-out test set micro-F1 are AllDeepSets: 0.6309 $\\pm$ 0.0147 (DisGeNet), 0.4324 $\\pm$ 0.0220 (TCGA-MC3); AllSetTransformer: 0.6355 $\\pm$ 0.0160 (DisGeNet), 0.4904 $\\pm$ 0.0158 (TCGA-MC3). The results from both AllDeepSets and AllSetTransformer have non-overlapping standard deviation intervals, in fact wide separation, with their counterparts from SHINE. These results echo with our observations that strongly dual attention explores the hypergraph propagation from a different angle than both AllDeepSets and AllSetTransformer, and suggest that effectively combining both angles could be an interesting future direction.\n\n### 4. Hidden dimensions sensitivity, optimal K, hyperparameter details \nWe will include a sensitivity analysis of hyperparameters in the supplement. In general, the performance is less sensitive to the hidden dimensions when it is at sufficiently big (300-500), with <0.05 change in micro-F1 score. Smaller hidden dimensions (100-200) can lead to >0.05 micro-F1 drop, likely due to insufficient representation power.\n\nWe clarify that we varied the number K of layers from 1 to 4, and found that 2 layers to give the best results for SHINE. To better help an expert to reproduce the results of the paper and build effective models from scratch, we will add the download links from the mentioned data sources, and provide detailed preprocessing steps to create the datasets we used. We will also provide the best hyperparameters for the models in addition to the number K of layers. \n", " We thank all Reviewers for their time and feedback! We were glad to see that our work was in general positively received, and that Reviewers commented that “Explicitly treating hyper edges as first class citizens in the GNN modelling is of interest, since in this was hyper edges can be the subjects of notions of regularisation or attention” (Reviewer tNZm), “The authors present a novel and niche problem of sub-hypergraph representation learning which has not been explored in the GNN community. … (strongly) dual attention is aligned with the dual form of hypergraphs, and I can see the novelty of this paper here” (Reviewer noVi), “the model compares favorably with respect to the considered competitors” (Reviewer CYi2), “The paper is well organised” (Reviewer FtLi). We also appreciate their questions, comments, and suggestions. \n\n**New Results** We have followed the Reviewers’ suggestions and conducted further experiments on **five additional state-of-the-art baselines and ablation studies**, including SubGNN bipartite (Reviewers tNZm, noVi, FtLi), weighted subgraph attention (WSA) ablation (Reviewers tNZm, noVi), MLP replacing dual attention ablation (Reviewer FtLi), AllSetTransformer (Reviewer FtLi), AllDeepSets (Reviewer FtLi). 
Please refer to Reviewer specific responses for the newly added results and accompanying analysis, which collectively have further strengthened our paper. \n\nWe provide answers to the main points raised by each Reviewer below, and outline changes that we plan to address in a potential revision. Please feel free to follow up with us! We very much welcome any feedback that can further strengthen the paper.\n", " Thank you for the detailed review and constructive suggestions!\n### 1. Is the regularisation effective or of importance\nThank you for pointing out the importance of the ablation case of the hypergraph regularization. We clarify that our Supplemental Table 3 has presented this case: SHINE without hypergraph regularization has held-out test set micro-F1 of 0.6829 $\\pm$ 0.0059 on the DisGeNet dataset and 0.5247 $\\pm$ 0.0048 on the TCGA-MC3 dataset, both having non-overlapping standard deviation intervals with their counterparts from SHINE. This suggests that that adding hypergraph regularization further improves performance. We will add the following discussion to provide intuition. “It is known that Graph Convolutional Network suffers from oversmoothing when the number of layers increases, as increasingly globally uniform representation of nodes may be developed. On the other hand, attention could limit this phenomenon by limiting to a restricted set of nodes. The effect of hypergraph regularization, while also smoothing, happens on a local scale as part of a direct optimization objective and does not accumulate with increasing number of layers. Such decoupling between attention and local smoothing allows SHINE to better explore the optimization landscape.” Given the shared interest on ablation studies, we will move our ablation analysis discussions to the main paper, integrating additional ablation cases suggested by you and other Reviewers. \n\n### 2. Adding experiment on hypergraph as a bi-partite graph\nThank you for suggesting this important experiment. We agree that the importance of strong duality would follow naturally from comparing SHINE with hypergraph as a bi-partite graph where hyperedges are materialized as the nodes of one part and the genes are the nodes of the other part. We have conducted the suggested experiment based on the implementation of SubGNN, and will add the following results. **New Results** SubGNN bipartite has held-out test set micro-F1 of 0.6137 $\\pm$ 0.0097 on the DisGeNet dataset, having non-overlapping standard deviation intervals, in fact wide separation, with the results from SHINE. The experiment on the TCGA-MC3 dataset has taken over a couple of days and has not finished yet, but from what has come out, we are confident that the comparison will be similar as the case of the DisGeNet dataset. We will update the results to the revision when the full SubGNN experiment on the TCGA-MC3 dataset has finished.\n\n### 3. WSA (the weighted subgraph attention) ablation study, I.e., what if we consider a subgraph simply the sum of the nodes (genes) that are of interest (with mutations) for each patient (subgraph)\nThank you for suggesting this important ablation case. We have followed your suggestion on assessing the impact of the introduction of WSA (the weighted subgraph attention) and will add the following results. 
**New Results** SHINE without WSA has obtained held-out test set micro-F1 of 0.6472 $\\pm$ 0.0053 on the DisGeNet dataset and 0.4388 $\\pm$ 0.0091 on the TCGA-MC3 dataset, both having non-overlapping standard deviation intervals with their counterparts from SHINE, separated by a wide margin. We add the following to the Discussion. “We also notice that the performance drop due to WSA ablation on the TCGA-MC3 dataset is larger than that on the DisGeNet dataset. This is consistent with the fact that the TCGA-MC3 dataset has denser hypergraph and larger subgraphs than the DisGeNet dataset. This is also consistent with the fact that differentiating among cancer subtypes is a more complex and nuanced task than differentiating among disease categories. These observations collectively argue for the benefits of weighted subgraph attention over direct aggregation such as sum, and more increasingly so for larger datasets and more complex tasks.”\n### 4. Additional experiments to quantify the relative importance of the various ideas introduced would strengthen the paper.\n**New Results** Many thanks to the ablation cases suggested by you (e.g., points 1, 2, 3) and the other Reviewers, we now have a further expanded and comprehensive experiments and results by adding **five additional state-of-the-art baselines and ablation studies**, including SubGNN bipartite (Reviewers tNZm, noVi, FtLi), WSA ablation (Reviewers tNZm, noVi), hypergraph regulation ablation (Reviewer tNZm), MLP replacing dual attention ablation (Reviewer FtLi), AllSetTransformer (Reviewer FtLi), AllDeepSets (Reviewer FtLi). Please refer to Reviewer specific responses for these newly added results and accompanying analysis, which have indeed further strengthened the paper, thank you for the constructive suggestions!\n", " Thank you for the detailed review and constructive suggestions!\n### Presentation of model tightly interconnected with genetic medicine, a more general description can improve the presentation\nThank you for suggesting a better presentation strategy for the model and will conduct an overhaul to make the model description more general. For example, in the description of hypergraph learning, we will consistently refer to hyperedges instead of genetic pathways and leave the interconnection with genetic medicine starting from the description of experiments. \nWe will also add the following discussion to explain why the field of genetic medicine is more general than it appears, and in fact, impacts the whole field of medicine. “The field of genetic medicine encompasses areas of molecular biology and clinical phenotyping to explore new relationships between disease susceptibility and human genetics. Though appearing as a single field, it revolutionizes the practice of medicine in preventing, modifying and treating many diseases such as hypertension, obesity, diabetes, arthrosclerosis, and cancer (see Green et al; reference below). Our carefully chosen experimental datasets simultaneously considers the availability of large public data and the demonstration of general medicine applications: TCGA-MC3 being across all major cancer types, and DisGeNet being across all major disease categories.” \n\n_Green, Eric D., et al. \"Strategic vision for improving human health at The Forefront of Genomics.\" Nature 586.7831 (2020): 683-692._\n### Making paper easier to follow for non-bioinformatics readers\nThank you for suggesting better accessibility of the paper by general audience. 
We add to Introduction the following elaboration and example illustrating some intuitions behind the proposed model, in which we tried to use descriptive language instead of jargons for easier readability by general audience. We will similarly overhaul other places in the paper with more descriptive and accessible languages when discussing genetic medicine applications.\n\n“Genetic pathways are a valuable tool to assist in representing, understanding, and analyzing the complex interactions between molecular functions. The pathways contain multiple genes (can be modeled using hyperedges) and correspond to genetic functions including regulations, genetic signaling, and metabolic interactions. They have a wide range of applications including predicting cellular activity and inferring disease types and status (see Alon 2019; reference below). For a simplified and illustrative example, a signaling pathway p1 (having 20 genes) sensing the environment may govern (the governing function embodied as a pathway p2 having 15 genes) the expression of transcription factors in another signaling pathway p3 (having 23 genes), which then controls (the controlling function embodied as a pathway p4 having 34 genes) the expression of proteins that play roles as enzymes in a metabolic pathway p5 (having 57 genes). In general, there will be partial overlap between pathways p1 and p2, p2 and p3, p3 and p4, p4 and p5, and other potential partial overlaps corresponding to partial overlaps between their corresponding hyperedges.”\n\n_Alon U. “An introduction to systems biology: design principles of biological circuits”. CRC press; 2019._\n\nWe also add to Supplement more background and context information for general audience, for example, explaining what the datasets look like, as follows: \n“The genetic variants are stored in a specially formatted file. A row in the file specifies a particular variant (e.g., Single Nucleotide Polymorphism or insertion/deletion), its chromosomal location, and what proportion of the sequencing reads covering that chromosomal location have that variant, among other characteristics.”\n\n### Effect of the number K of layers\nThank you for suggesting discussion/analysis on K. We add the clarification that we varied the number K of layers from 1 to 4, and found that 2 strongly dual attention layers (followed by a weighted subgraph attention layer) gave the best results for SHINE. For the original and added baselines/ablations (e.g., SubGNN, AllSetTransformer, AllDeepSet) models, we followed their respective papers in selecting the number K of layers, e.g., also varying K from 1 to 4 for SubGNN. \n\nWe completely agree with the Reviewer’s assessment and add the following to Discussion. “It is known that Graph Convolutional Network suffers from oversmoothing when the number of layers increases, as increasingly globally uniform representation of nodes may be developed. On the other hand, attention could limit this phenomenon by limiting to a restricted set of nodes. The effect of hypergraph regularization, while also smoothing, happens on a local scale as part of a direct optimization objective and does not accumulate with increasing number of layers. Such decoupling between attention and local smoothing allows SHINE to better explore the optimization landscape.” \n", " Thank you for the detailed review and constructive suggestions!\n\n### 1. 
More analysis of Strongly Dual Attention; compare with SubGNN; more rigorous ablation studies on the architecture\nThank you for suggesting more analysis of Strongly Dual Attention and for suggesting SubGNN as a baseline to justify the hypergraph representations of specific problems in this work. As Reviewer tNZm also pointed out, the importance of strong duality would also follow naturally from such comparison. **New Results** We have conducted the suggested experiment, and will add the following results. SubGNN bipartite has held-out test set micro-F1 of 0.6137 $\\pm$ 0.0097 on the DisGeNet dataset, having non-overlapping standard deviation intervals, in fact wide separation, with the results from SHINE. The experiment on the TCGA-MC3 dataset has taken over a couple of days and has not finished yet, but from what has come out, we are confident that the comparison will be similar as the case of the DisGeNet dataset. We will update the results to the revision when the full SubGNN experiment on the TCGA-MC3 dataset has finished. We tried SubGNN clique expansion. However, the program was killed after exhausting all 1TB RAM on our server, due to large hypergraph and hyperedges.\n\nIn addition, in Supplement Table 3, the comparison between “HyperGAT (not strictly dual attention)” and “SHINE without hypergraph regularization” is a direct ablation comparison of strongly dual attention where the difference is whether or not to retain parameter-sharing for hyperedge and node attention. As we mentioned in the section of WSA, HyperGAT does not directly support subgraph inferencing, and we added our WSA module to those models for subgraph inferencing. As can be seen, “SHINE without hypergraph regularization” clearly outperforms “HyperGAT (not strictly dual attention)” demonstrating the utility of strongly dual attention. \n\n**New Results** For more rigorous ablation studies on the architecture, we have followed you and the other Reviewers’ suggestions and added **5 state-of-the-art baselines and ablation studies**, including SubGNN bipartite (Reviewers tNZm, noVi, FtLi), weighted subgraph attention (WSA) ablation (Reviewers tNZm, noVi), MLP replacing dual attention ablation (Reviewer FtLi), AllSetTransformer (Reviewer FtLi), AllDeepSets (Reviewer FtLi). Please refer to Reviewer specific responses for the newly added results and accompanying analysis, which collectively have further strengthened our paper. \n\n### 2. WSA ablation study\nThank you for suggesting this important ablation case and suggesting the important citation [2] GATED GRAPH SEQUENCE NEURAL NETWORKS. We will properly cite it and will add the WSA ablation results where in general models without WSA show clear performance drop. **New Results** In particular, SHINE without WSA has obtained held-out test set micro-F1 of 0.6472 $\\pm$ 0.0053 on DisGeNet dataset and 0.4388 $\\pm$ 0.0091 on TCGA-MC3 dataset, both having non-overlapping standard deviation intervals with their counterparts from SHINE, separated by a wide margin. We add the following to the Discussion. “We also notice that the performance drop due to WSA ablation on the TCGA-MC3 dataset is larger than that on the DisGeNet dataset. This is consistent with the fact that the TCGA-MC3 dataset has denser hypergraph and larger subgraphs than the DisGeNet dataset. This is also consistent with the fact that differentiating among cancer subtypes is a more complex and nuanced task than differentiating among disease categories. 
These observations collectively argue for the benefits of weighted subgraph attention over direct aggregation such as sum, and more increasingly so for larger datasets and more complex tasks.”\n\n### 3. Model interpretation by attention as strength\nWe clarify that we did not mean to characterize that model interpretation by attention is an exclusive strength of SHINE. As the Reviewer correctly pointed out, other attention models such as HyperGAT can also provide attention based model explanation. Currently few quantitative interpretation comparison methods (see Yuan et al.; reference below) are applicable to compare the genetic pathways interpretation, as genetic pathways correspond to molecular functions and it is hard to quantify a molecular function’s value to a living organism. However, we do note that HyperGAT has no hypergraph regularization that nodes with similar context of pathways (molecular functions) should have similar representations. Thus HyperGAT has no built-in measures to prevent or discourage genes belonging to the same functional class (e.g., promoting immune reactions) from having drastically different representations (e.g., opposite directions), a phenomenon that will pose interpretation difficulty. \n\n_Yuan H, Yu H, Gui S, Ji S. Explainability in graph neural networks: A taxonomic survey. arXiv preprint arXiv:2012.15445. 2020 Dec 31._\n\n\n\n", " The paper proposes a hypergraph neural network model exploiting a double attention mechanism in the message passing scheme. The overall architecture is designed to process sub-hypergraphs once representations are computed for the nodes and edges. The learning objective includes a regularization term based on the hypergraph laplacian.\nThe proposed model is evaluated on disease classification based on gene-genetic pathways data, showing higher F1 values with respect to a set of competitors.\nFinally, due to the attention mechanism intepretions in terms of gene pathways (hyperedges) can be derived from the model outputs. Strengths\n- the model incorporates a dual attention mechanism applied to nodes and hyperedges respectively, exploiting the same attention context vector. This desing choice is claimed to prevent overfitting reducing the number of parameters\n- the architecture includes an attention module to derive subgraph representations. This scheme allows the application of the mode in a inductive setting\n- the model compares favourably with respect to the considered competitors on two benchmarks in genetic medicine\n\nWeaknesses\n- The presentation of the model is tightly interconnected with the proposed application in genetic medicine, making it appear less general\n- Given the focus on bioinformatics, the paper is hard to follow for readers not completely familar with this topic.\n- The effect of the number K of layers is not investigated (maybe I missed it, but the number of layers used in the experiments is not reported). It is known that (Convolutional) Graph Neural Network suffer from oversmoothing when the depth of the network increases. This may hinder the results in some applications since uniform representation of nodes are developed. Attention may perhaps limit this phenomenon, but on the other side node/edge regularization may produce add a related effect. The ablation study show a positive effect of regularization, but some discussion/analysis should be provided.\n\n How performances are affected by the number K of layers? 
The authors discuss some limitations that need futher work in section of the paper both for the model architecture and the specific application considered for the evaluation.\nAs listed in the weaknesses, a more general description (not tailored for the considered task in genetic medicine) would have improved the presentation of the proposed hypergraph neural network architecture.", " The authors propose a GNN approach to learn embeddings of sub-hyper-graphs. The approach has an explicit treatment of hyper edges, e.g., it does not resort to clique expansion, and makes use of a regulariser based on the hyper graph laplacian. The application chosen is that of disease prediction for patients (modelled as sub hyper graphs) given a pathway network (modelled as a hyper graph with genes as nodes and sets/pathways as hyper edges). Pos\nExplicitly treating hyper edges as first class citizens in the GNN modelling is of interest, since in this was hyper edges can be the subjects of notions of regularisation or attention.\n\nNeg\nThe relative importance of the various ideas introduced is not clear, i.e., a better experimental design with clearer baselines and an ablation study is warranted. More specifically:\n\n1. is the regularisation effective or of importance? (ablation case)\n\n2. is the proposed architecture much different from using a standard graph neural network with attention on a pre-processed hyper graph? In particular the pre-processing could consist in representing an hyper graph as a bi-partite graph where hyper edges are materialised as the nodes of one part and the genes are the nodes of the other part. (experiment)\nNote that the whole discussion regarding strong duality would follow automatically in this case.\n\n3. is the introduction of WSA (the weighted subgraph attention) needed? what happens if we replace the whole subgraph treatment by Mji directly? I.e., what if we consider a subgraph simply the sum of the nodes (genes) that are of interest (with mutations) for each patient (subgraph), that is, we could learn directly the embedding of the nodes/hyperedges for the classification task when they are simply summed up for each patient. (experiment/ablation) Additional experiments to quantify the relative importance of the various ideas introduced would strengthen the paper.\nIn particular consider offering support to reply to the points 1, 2 and 3 raised as negative elements in the Strengths And Weaknesses section. yes.", " This paper suggests sub-hypergraph representation learning, a niche problem related to subgraph and hypergraph learning. To tackle this problem, the authors propose the SHINE (SubHypergraph Inductive Neural nEtwork) model, which consists of three modules: strongly dual attention message passing, hypergraph regularization, and weighted subgraph attention. Experiments on two real-world datasets demonstrate the superiority of SHINE on performance (against baselines including GNNs for hypergraphs) and interpretation (using attention). ## Strengths\n\nThe authors present a novel and niche problem of sub-hypergraph representation learning which has not been explored in the GNN community. A specific example (cancer patients as subgraphs of genes in hypergraphs) can be a practical application for this task. The performance improvement by the authors’ approach is significant.\n\n## Weaknesses\n\nHowever, I think this paper is not ready for publication for the following reasons.\n\nFirst, the technical novelty of SHINE is limited. 
This model consists of several parts, and each of them is a slightly modified version of existing approaches. Using the attention to both nodes and (hyper) edges is presented in HyperGAT, and the authors are aware of it. Nevertheless, the idea of (strongly) dual attention is aligned with the dual form of hypergraphs, and I can see the novelty of this paper here. However, explicit regularization by Laplacian (Hypergraph regularization) [1] and pooling by attention weights (Weighted Subgraph Attention) [2] are well-known methods in using GNNs. In this case, SHINE's novelty is limited to Strongly Dual Attention, and a more detailed analysis of this part is required.\n\nSecond, related to the first paragraph, there are no rigorous ablation studies on the architecture. As many submodules make up the model, it is necessary to study where the performance gain comes from. In the supplementary material, only the ablation study on hypergraph regularization is presented, and the study on dual attention message passing is presented by comparison with HyperGAT. However, there are other differences in attention forms between HyperGAT and SHINE, and comparing these two does not provide a fully controlled ablation study of dual attention message passing. I recommend authors retain all parts except parameter-sharing in the attention. In addition, the performance comparison between SHINE with/without WSA and other GNNs with/without WSA also should be presented.\n\nThird, it is skeptical that model interpretation by attention is an exclusive strength of SHINE. There are learned attentions in other attentional models like HyperGAT. Can these models provide interpretation at the same level as SHINE? Can you compare interpretations between models? Does SHINE give more precise explanations than other models?\n\nLastly, there are missing baselines for subgraph classification; in particular, the SubGNN can be a strong baseline. Of course, SubGNN is not designed for hypergraphs, but it is straightforward to create graphs from hypergraphs such as clique expansion. The transformation from hypergraphs to graphs is done only once before training; thus, it has a low overhead. Comparing SHINE and GNNs-for-subgraphs can justify that these specific problems in this work should be represented as a hypergraph.\n\n## References\n\n- [1] Learning with Hypergraphs: Clustering, Classification, and Embedding\n- [2] GATED GRAPH SEQUENCE NEURAL NETWORKS\n My questions are summarized in the weaknesses section.\n The authors do not address the potential negative societal impact of their work. This paper targets a high-level machine learning problem called sub-hypergraph representation learning; however, all datasets are related to a particular area, genes, pathways, and diseases. There could be a potential societal impact that should be considered in real-world applications in this area (e.g., privacy). It would be nice if the authors addressed this point.\n", " Hypergraph neural networks can exploit multi-way connections in relational datasets but they are underexplored in domains such as genetic medicine. In this paper, a hypergraph attention-based message passing neural network is proposed for sub(hyper)graph-level tasks, e.g., \n* genes: nodes, \n* pathways: hyperedges, \n* patients: subgraphs, \n* predict cancer type of patient: task. \n\nExperiments on genetic medicine datasets demonstrate the effectiveness of the proposed method SHINE: SubHypergraph Inductive Neural nEtwork. 
**Originality**\n\nEven though the paper explores an underexplored research topic in an interesting domain (subgraph representation learning for hypergraphs in genetic medicine), the methods proposed are incremental extensions of existing methods and not novel combinations of existing techniques.\n\nSpecifically in section 3.3., the ideas of hyperedge attention over nodes and node attention over hyperedges with parameter sharing are incremental extensions of well-known hypergraph attention networks.\n\nBy viewing the nodes and hyperedges as two types of vertices of a (bipartite) heterogeneous graph, the ideas of strongly dual attention mechanisms would be incremental extensions of existing attention-based methods for heterogeneous graphs, e.g., see \"A Survey on Heterogeneous Graph Embedding: Methods, Techniques, Applications and Sources\"\n\n\\\n**Quality**\n\nThe authors have discussed interesting weaknesses of their work (in addition to highlighting the strengths).\n\nMoreover, baseline comparison (Table 3), interpretability analysis (Table 4), and ablation study (Table 3 in supplementary) support the claims made in the paper empirically to an extent.\n\nHowever, formalising the key differences with existing similar methods (e.g., HyperGAT in lines 159-169) and confirming the differences with convincing (synthetic/real-world) experiments, e.g., on a dataset chosen cleverly to show clear failure of HyperGAT but success of SHINE, would improve the paper's quality. \n\n\\\n**Clarity**\n\nThe paper is well organised.\n\nDetails on datasets and hyperparameter tuning could help an expert to reproduce the results of the paper and build effective models (those with the best hyperparameters) from scratch.\n\nA discussion on computational complexity and an algorithm/pseudo code would further enhance the clarity of the paper.\n\n\\\n**Significance**\n\nIt is unclear from the paper why modelling genetic medicine datasets with hypergraphs, despite being a natural choice, is the best choice compared to straightforward alternatives.\n\nMore specifically, it is unclear why a (bipartite) heterogenous graph with genes: nodes of type 1, pathways: nodes of type 2, patients: (sub) heterogeneous graph would not be a reasonable choice.\n\nThe paper can be improved by positioning and comparing with set-based methods for exploiting hyperedges in hypergraphs, e.g., You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks, In ICLR'22. 1. How are the attention mechanisms in HyperGAT and SHINE different from existing attention mechanisms in heterogeneous graph neural networks with two different node types, e.g., treat genes and pathways as two different node types?\n2. How does a simple feed-forward neural network with hypergraph regularisation and subgraph attention perform? This baseline is basically an ablated baseline with strongly dual attention message passing removed from SHINE and replaced with an MLP.\n3. Is the strongly dual attention message passing scheme of SHINE an instantiation of the multiset formulation of \"You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks, In ICLR'22\"?\n4. What are the training times of all the methods (SHINE and all baselines)?\n5. How sensitive are the hyperparameters (e.g., hidden dimensions) to the performance of SHINE? What is the optimal number of hidden layers in SHINE? The authors have addressed the limitations and potential negative societal impacts adequately." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "AlqJ2FDqqYo1", "6QnXfIv7CDb", "soQcEa9QjBx", "hR-X65HwHae", "nips_2022_IsHRUzXPqhI", "ZREzLf4f6Ke", "xrsJ4_Q5L2s", "uZUAcvyQtOf", "nips_2022_IsHRUzXPqhI", "-Ms8W_5J3x", "cIwGbi4eZg", "5YZI5BfHtM", "nips_2022_IsHRUzXPqhI", "nips_2022_IsHRUzXPqhI", "nips_2022_IsHRUzXPqhI", "nips_2022_IsHRUzXPqhI" ]
nips_2022_0TDki1mlcwz
LASSIE: Learning Articulated Shapes from Sparse Image Ensemble via 3D Part Discovery
Creating high-quality articulated 3D models of animals is challenging either via manual creation or using 3D scanning tools. Therefore, techniques to reconstruct articulated 3D objects from 2D images are crucial and highly useful. In this work, we propose a practical problem setting to estimate 3D pose and shape of animals given only a few (10-30) in-the-wild images of a particular animal species (say, horse). Contrary to existing works that rely on pre-defined template shapes, we do not assume any form of 2D or 3D ground-truth annotations, nor do we leverage any multi-view or temporal information. Moreover, each input image ensemble can contain animal instances with varying poses, backgrounds, illuminations, and textures. Our key insight is that 3D parts have much simpler shape compared to the overall animal and that they are robust w.r.t. animal pose articulations. Following these insights, we propose LASSIE, a novel optimization framework which discovers 3D parts in a self-supervised manner with minimal user intervention. A key driving force behind LASSIE is the enforcing of 2D-3D part consistency using self-supervisory deep features. Experiments on Pascal-Part and self-collected in-the-wild animal datasets demonstrate considerably better 3D reconstructions as well as both 2D and 3D part discovery compared to prior arts. Project page: https://chhankyao.github.io/lassie/
Accept
This paper had substantial discussion amongst reviewers, and concluded with mixed reviews (7, 7, 7, 4). The positive reviewers (mH51, rsf7, LjEQ) actively and strongly championed the paper. The remaining concern comes from boNQ, who (in reviewer-to-reviewer discussions) was primarily concerned with making sure that the authors are clear about specifying limitations in method and evaluation. In particular, boNQ was concerned about making sure the following aspects were clearly and prominently discussed in the paper: (a) generalization to new images; (b) assumption of the rest-pose prior; (c) lack of adjusting the shape per-instance; and (d) non-guarantee of gaps between parts. Additionally, boNQ was concerned about keypoint-based evaluations as a proxy for 3D (although was convinced by the videos in the supplemental). The AC has examined the paper, reviews, and discussion, and is inclined to agree with the accepting reviewers. The paper will be a strong contribution to the literature and will be of great interest the community. However, the AC notes that many of the reviewers may have similar questions to boNQ. Thus, the AC strongly encourages the authors to address boNQ's concerns in the final version of the paper. With the additional space, the AC believes it will be feasible to specify the limitations more clearly and earlier in the manuscript, and also somehow incorporate additional visualizations demonstrating the effectiveness of the reconstructions. While there is no mechanism for enforcing these changes, they will substantially improve the reception of the paper.
train
[ "6HEUTxqH4_0", "yU1CFFpb6Z", "yHiCUhkT7lv", "Z41TEJOoCG1", "Qt82lSP0Krl", "9xPLXT1AhtJ", "YFcBFuprjb", "gyzxOGH74_7", "1ylsrqlhqh", "dYUfJyHObmW", "iWwjzdo9lCB", "5G9bqMqZbzA", "Vdb6l9FlDql", "YedAzEZVM8Md", "Bzlv9nj2eB_", "Im5D4RRxfou", "VwTi-2vjRJT", "Ecz3KPbPn_O", "qTHSMJFN7Gh", "ASzaLqRnCmD", "9czt0pTJERN" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' detailed response. I like the new experiments on novel image \"inference\". I am also satisfied with the response to the current limitations and have no further questions. I think this submission present important insight on reconstructing articulated objects from sparse images.", " Thank you for the feedback. We will clarify the differences to UMR and optimization steps in the paper as well. It would be great if you could take our rebuttal into consideration and update the rating accordingly.", " Thank you; that clarifies the difference well. The paper will benefit from having such description in it.", " I appreciate the response, both to my review and the other reviews. At this point, I am satisfied with the answers and don't have any further questions.", " Hi Reviewers,\n\nThe discussion period is closing soon. Please take a look at the responses from the authors. If you have further questions, please ask them now, since the authors will be unable to respond soon. It's substantially more productive, effective, and reasonable to have a quick back-and-forth with authors now than to raise additional questions or concerns post-discussion period that the authors are unable to address. \n\nThanks,\n\nAC", " **The use of image features and part segmentations in LASSIE and UMR**\n\nAt a high-level, it appears that both UMR and LASSIE start from 2D part segmentations and then lift them onto 3D. A key difference is that LASSIE directly discovers parts in 3D using 2D feature consistency, whereas UMR uses 2D part consistency via canonical UV maps. More specifically, UMR uses 2D parts (not 2D features) and 2D UV maps as common canonical part representations. Instead, LASSIE directly optimizes 3D part features with feature based loss (not 2D parts) and LASSIE does not use UV maps like in UMR. We only use 2D parts for feature initialization, not during optimization. That is, we discover parts directly in 3D with skeleton constraints, without too much reliance on intermediate 2D part discovery. Hence, LASSIE is less sensitive to the issues in 2D part segmentations compared to UMR. Next, we discuss more technical differences in the optimization.\n\n---\n**EM-style optimization** \n\nAlthough LASSIE and UMR both utilize an EM-style strategy for surface feature optimization, there are several differences caused by the problem setting and 2D features we deal with. UMR utilizes a canonical UV map and a pre-defined UV-to-3D mapping to soft-assign mesh surfaces to SCOPS parts. In the E-step, UMR updates the 3D reconstruction network using a part-level probability loss and vertex projection loss through semantic rendering, which are based on the coarse part estimations from SCOPS. In the M-step, the canonical UV maps are updated by averaging across all instances. We observe that learning a high-quality canonical surface map is a suboptimal choice in our framework due to the sparse image setting and multi-surface representation. Instead, we directly exploit the 2D image features (DINO) to define our semantic consistency loss. In our E-step, we assign the average DINO features to the corresponding 3D surfaces, making the 3D surface features more fine-grained compared to UMR. In the M-step, we apply the semantic consistency loss defined between dense 3D points and 2D pixels, unlike the part-level constraints in UMR limited by the number of parts in SCOPS. 
Note that we only use the 2D feature clusters for feature initialization and thus the semantic consistency loss is not limited by the number of 2D parts.\n\n---\n**SCOPS v.s. DINO features**\n\nEven though both DINO and SCOPS are not our contributions, we would like to comment on their differences for the sake of completeness. We find that DINO-ViT features generalize better to novel objects compared to VGG features used in SCOPS part discovery. Moreover, recent work [1] showed that DINO features can discover more consistent and higher resolution parts compared to SCOPS. Our technical contributions in 3D part discovery are orthogonal to the advances in 2D part discovery.\n\n[1] Amir et al. \"Deep vit features as dense visual descriptors.\" arXiv preprint. 2021.", " **Clarification of optimization algorithm**\n\nThat is correct. We will add the pseudo-code to the supplemental material and clarify it in the manuscript.\n", " **Evaluation protocol**\n\nWe acknowledge that 2D keypoint transfer is not a direct 3D evaluation metric, but it is the standard practice in related literature in the absence of 3D annotations. In our experiments, 2D PCK accompanied by visual verification of multi-view results forms a reliable evaluation of 3D reconstruction quality. Although a model can achieve high keypoint transfer accuracy (PCK) by producing accurate canonical surface maps on the source and target images, we argue that the accurate surface maps are not easy to obtain in practice. First, the 2D (DINO) feature maps cannot serve as a canonical surface mapping since several animal parts (e.g. front v.s. back legs, left v.s. right legs, tail hair v.s. head hair) share similar 2D features (see DINO feature clusters in supplemental Figures 10 and 11). We show in the table below that directly using such feature maps will suffer from 2D ambiguity and result in low PCK. On the other hand, defining an accurate 2D-to-3D mapping requires either manual labeling (A-CSM) or fitting on large-scale images (UMR), both of which are not available in our scenario. Moreover, our 3D representation involves combining multiple 3D neural surfaces for articulated shape, imposing more difficulties in defining the surface mapping in 2D. Due to these, our model can only achieve high PCK by an accurate estimation of 3D camera, pose and shapes. Please note that we also include other metrics like mask IOU and part transfer to evaluate the faithfulness to input images and dense correspondence between images. We kindly ask reviewer to take all the quantitative metrics and qualitative comparisons into consideration when evaluating the effectiveness of our method. We hope to extend LASSIE to more object classes with 3D ground-truths in the future work.\n\n**Table-T5**: Keypoint transfer (PCK\\@0.1) evaluation of using DINO features as surface mapping on horse and zebra images. In DINO-nn, we transfer each source keypoint to the target image by finding the pixel with most similar DINO features (nearest neighbor).\n| Method | Horse | Zebra |\n| :----------- | :-----------: | :-----------: |\n| DINO-nn | 59.7 | 62.4 |\n| LASSIE | 73.0 | 79.9 |", " Thanks once more for clarifications. I find that due to the combination of (1) the method minimising distance between re-projected points, 2) method evaluated on the training set, evaluating PCK is a weak indicator of 3D reconstruction quality. I.e. the method can probably fit a good surface map (which should be easy given DINO features) without reconstructing the correct 3D shape. 
Hence we can only rely on visuals to evaluate the reconstruction quality.", " So there are 4 stages, where you gradually extend the range of tuneable parameters, and within each stage, you run this EM-style optimisation? That's much more clear now, thanks.", " Thank you for your response!\n\nSorry if I was not clear about positioning with respect to UMR. I acknowledge that you solve a different problem, which requires fitting an articulated mesh. However, the big part of the solution of your problem is the \"EM-like\" optimisation algorithm. Is there a crucial difference between using SCOPS and DINO features in this context? UMR also uses what can be described as an EM-like algorithm, with the averaged SCOPS features acting as latent variables. How specifically does your algorithm differ from it?", " ---\n**Missing references**\n\nWe thank the reviewer for the pointers to additional related works and we will add them to the manuscript.\n\n---\n**Comparison with BanMo’s semantic loss**\n\nBanMo learns a canonical feature embedding by enforcing the consistency between feature matching and geometric warping, which is made tractable by the dense temporal correspondence (optical flow) between video frames. The semantic loss in BanMo enforces the 2D-3D cycle consistency between 2D coordinates and canonical 3D points. Considering that our image ensemble is sparse and un-correlated, we coarsely initialize and regularize the 3D surface features at the part-level. The proposed semantic loss allows us to first localize the 3D parts then refine the detailed part shapes in the challenging setting.\n\n---\n**Comparison with additional baselines. Why not compare with CMR, CSM, U-CMR, VMR**\n\nPlease see **Comparison with additional baselines** in the General Response-1 above.\n\n---\n**Evaluations on CUB bird images**\n\nWe design LASSIE for articulated shapes and thus its advantage over prior mesh reconstruction methods is best shown in animals like zebras and horses. To demonstrate that LASSIE can also be applied to more compact shapes, we show some quantitative comparisons on CUB bird images in the table below and qualitative results in this [_anonymous link_](https://www.dropbox.com/s/ah0ge6br92y25mp/cub.png?dl=0). Specifically, we select 3 classes (Laysan Albatross, Mallard, and Painted Bunting) in the dataset and filter out the images with truncation or occlusions, resulting in roughly 30 images per class. As shown in the results, LASSIE produces slightly lower PCK but higher mask IOU compared to UMR.\n\n**Table-T4**: Quantitative comparison on CUB images. We perform optimization and evaluation on each class and report the average metrics.\n| Method | PCK\\@0.1 | Mask IOU |\n| :----------- | :-----------: | :-----------: |\n| UMR | 60.5 | 74.1 |\n| A-CSM | 45.1 | 76.6 |\n| LASSIE | 59.8 | 77.3 |\n\n---\n**Error bars**\n\nIn our optimization framework, the random seed initialization does not affect the outputs much since the most sensitive parameters (e.g. camera, bone scaling, pose) are not randomly initialized.\n\n---\n**Reduced number of images with more diverse poses**\n\nWe observe that the limiting factor of few-image optimization is camera viewpoint. That is, with enough diversity of the camera viewpoints in the image ensemble, LASSIE can produce reasonably good results in terms of part discovery and faithfulness to input images. 
Using images with more diverse poses can produce more realistic and connected part shapes when re-posed or animated.\n\n---\n**Manual annotations used**\n\nPlease see **Manual annotations needed. Is LASSIE self-supervised?** in the General Response-2 above.\n\n---\n**Clarification of optimization stages**\n\nPlease see **Clarification of LASSIE optimization** in the General Response-1 above.\n\n---\n**Other suggestions**\n\nThanks for the suggestions. We will incorporate them to improve the paper.\n", " ---\n**Comparison with UMR**\n\nDespite that LASSIE and UMR both utilize 2D semantic parts/clusters as supervisory signals, we use the semantic correspondence differently to deal with a fundamentally different problem which UMR cannot be trivially extended to. First, UMR depends on a pre-trained SCOPS model to define part-level correspondence between images and maintain a canonical UV map for semantic rendering, whereas we directly utilize the DINO features to define semantic consistency loss between pixels and 3D surface points. Second, UMR represents the shapes as a single mesh, which is more suitable for compact shapes and can only produce fixed-resolution outputs. Last but not least, UMR cannot perform part-based manipulation or animation given a target pose or motion since it does not model articulations. We also show some quantitative comparisons with UMR in Table-T1 in the General Response-1 above, demonstrating our advantage on articulated shapes like horse bodies. We will add this discussion to the manuscript.\n\n---\n**Is LASSIE self-supervised?**\n\nPlease see **Manual annotations needed. Is LASSIE self-supervised?** in the General Response-2 above.\n\n---\n**Clarification of optimization algorithm**\n\nPlease see **Clarification of LASSIE optimization** in the General Response-1 above.\n\n---\n**Justification of EM-style feature optimization**\n\nThis EM-style semantic and geometry optimization is crucial since it can progressively refine the 3D surface features given better pose and shape fitting, then in return provide more accurate surface-pixel correspondence. Detailed optimization pseudo-code is shown in the General Response-1 above. We justify it by an ablation study of using the original DINO features for pose and shape optimization (one step in EM). As shown in the table below, one step feature optimization leads to significantly worse results since the 3D surface features are only coarsely initialized by the DINO cluster-to-part mapping.\n\n**Table-T3**: Quantitative comparison with/without EM-style feature optimization.\n| Method | Horse | Zebra |\n| :----------- | :-----------: | :-----------: |\n| One step | 68.1 / 53.6 | 70.8 / 56.3 |\n| EM-style | 73.0 / 58.0 | 79.9 / 63.3 |\n\n---\n**Evaluation using 2D keypoint transfer**\n\nConsidering the lack of 3D annotated data for the animal classes of our interest, we follow the common practice in prior works (UMR, CSM, A-CSM, 3D Safari, etc) to evaluate 2D keypoint transfer accuracy. We aim to extend LASSIE to more diverse classes like human bodies in the future work.\n\n---\n**Chamfer distance in semantic loss**\n\nIn Eq. 3, a corresponding pair of 2D pixel and 3D surface point should be close in both the geometric image space and the semantic feature space. By considering both geometric distance and semantic distance, we can pull the 3D points closer to their corresponding pixels by minimizing the semantic loss (L227-229). 
Only minimizing the geometric distance can easily result in local minima where the output parts fit the overall silhouettes but not their corresponding semantic clusters.\n\n---\n**Clarification of evaluation protocol**\n\nThe 3D baselines like 3D Safari and A-CSM are trained on large-scale image datasets and evaluated on our image ensemble. LASSIE is optimized and evaluated on all available images in the ensemble (supplemental Table 2). We use the released models of 3D Safari (zebra) and A-CSM (horse, cow, sheep) to evaluate on the closest animal class (L280-282). We will clarify when the baselines are trained on a different class in the evaluation tables.\n\n---\n**Inference on novel images**\n\nPlease see **Inference on novel images** in the General Response-2 above.\n\n---\n**Part shape constraints**\n\nPlease see **Strong pose and shape regularizations** in the General Response-2 above.\n\n---\n**Pose prior and bone angle losses**\n\nPlease see **Strong pose and shape regularizations** in the General Response-2 above.\n\n---\n**Typo and grammar issues**\n\nThanks for the corrections. We will revise the paper accordingly.\n", " ---\n**Inference on novel images**\n\nPlease see **Inference on novel images** in the General Response-2 above.\n\n---\n**Unfair comparison with learning-based methods**\n\nWe are aware that LASSIE results are not directly comparable to the learning-based methods (L277-230) since we deal with a different problem setting. However, there exists no closer framework and it is non-trivial to extend these baselines to our test-time optimization scenario. Moreover, the test-time optimized results of 3D Safari or A-CSM are also not directly comparable to LASSIE since both methods utilize a shape model fitted on large-scale datasets.\n\n---\n**Part shape regularizations**\n\nPlease see **Strong pose and shape regularizations** in the General Response-2 above.\n\n---\n**Skeleton-based representation**\n\nWhile the 3D skeleton can be quite simple and generic (all quadrupeds share the same skeleton in our experiments), it provides a strong and crucial regularization for part transformations. Although cats are not included in our experiments, some qualitative results on tiger images are shown in manuscript Figure 3 and supplemental Figure 11, which are reasonably faithful to the inputs. We observe that LASSIE currently struggles with a) highly articulated parts like elephant trunks (supplemental Figure 12) and b) fluffy animals that appear with more instance variations and ambiguous articulations. In future work, we hope to solve these issues by discovering 3D skeletons automatically and loosening the rigid part constraints with the help of implicit skeleton representation or temporal correspondence in videos.\n\n---\n**Bone rotation regularizations**\n\nPlease see **Strong pose and shape regularizations** in the General Response-2 above.\n\n---\n**Clarification of latent part code initialization**\n\nThe latent part codes are initialized as zero vectors and optimized for each part during full training (after VAE pretraining).\n\n---\n**Clarification of part shape visualization in Figure 2**\n\nThe output part primitives and deformations are indeed in the canonical space. We show the oriented parts in Figure 2 to indicate the correspondence between individual parts and overall reconstruction. 
We will revise the figure to avoid confusion.\n\n---\n**Clarification of optimization process**\n\nPlease see **Clarification of LASSIE optimization** in the General Response-1 above.\n\n---\n**Confusion between head and tail localization**\n\nSince our only image-level supervision is from DINO features, LASSIE can indeed estimate inaccurate camera viewpoints and fall into local minima if the features are noisy and not semantically clustered. There are 2 failure cases out of the 30 zebra images in our dataset. Note that zebra images have the most ambiguous feature clusters due to their texture, other classes in our experiments do not suffer from this issue (see supplemental Figures 10 and 11).\n\n---\n**Clarification of keypoint annotations**\n\nThe 2D keypoints in our self-collected image ensembles are manually annotated. For Pascal-part images, we automatically find the keypoints by calculating the centers/corners of ground-truth part masks.\n\n---\n**Typo and grammar issues**\n\nThanks for the corrections. We will revise the paper accordingly.\n", " ---\n**Comparison with additional baselines. Why 3D-Safari and A-CSM are selected instead of newer ones.**\n\nPlease see **Comparison with additional baselines** in the General Response-1 above.\n\n---\n**Typo and grammar issues**\n\nThanks for the corrections. We will revise the paper accordingly.\n", " ---\n**Inference on novel images (R2: LjEQ, R3: boNQ)**\n\nUnlike most prior mesh reconstruction methods, LASSIE deals with a NeRF-like optimization problem with sparse images, and further addresses some additional challenges like unknown cameras, diverse instances, articulations, etc. Therefore, our goal is to produce high-quality and faithful 3D shapes via test-time optimization on few images in-the-wild. \n\nTo perform inference on new images without fitting all parameters from scratch, one can fix the shared parameters (resting pose, part features, latent part codes, and part MLPs) optimized on the “training set”, and only optimize the instance-specific camera pose and articulation. In the table below, we show the quantitative comparison on our horse image ensemble. We randomly split the ensemble into training (20) and testing (10)s and report the average results on the testing set. We refer to the original framework as “full optimization”, and the new experiment “partial optimization”. As shown in the results, partial optimization leads to slightly lower PCK and Mask IOU but still performs favorably against prior methods.\n\n**Table-T2**: Full vs. partial optimization on horse test set images.\n| Method | PCK\\@0.1 | Mask IOU |\n| :----------- | :-----------: | :-----------: |\n| 3D Safari | 71.8 | 72.2 |\n| A-CSM | 69.3 | 72.5 |\n| Partial optimization | 72.0 | 80.1 |\n| Full optimization | 73.0 | 81.9 |\n\n---\n**Strong pose and shape regularizations (R2: LjEQ, R3: boNQ)**\n\nWe acknowledge the current limitations of our skeleton-based representation and strong regularizations. As the first attempt to address this novel and highly ill-posed problem, we find the proposed pose and shape constraints essential to produce good quality results and avoid unrealistic outputs.\n\n---\n+ **Pose and bone angle regularizations.**\nConsidering our challenging problem setting with sparse images, we find limiting the pose deviation from resting pose crucial to avoid unrealistic poses. Our bone angle loss, on the other hand, is generic to all quadrupeds in our experiments. 
Similar techniques have been commonly used in prior works (SMALR [1] and 3D Safari for animals, SMPLify [2] for human bodies), either with explicit pose constraints or GMM priors fitted on large datasets. To further alleviate the need of such pre-defined priors, we are working on more general priors like symmetry, gravity, part overlap, bone-axis rotation, etc. For instance, we can find the pitch-yaw-roll axes for each bone based on the initial skeleton and constrain roll-axis rotations to reduce ambiguity during shape optimization.\n\n---\n+ **Part shape constraints.**\nWhile the primitive prior MLP produces simple base shapes, individual Part MLPs enable detailed deformation from primitive-like bases. The connectivity between parts is guaranteed by scaling the part surfaces to fit the bone lengths (L142-144). To obtain higher quality shapes, one can perform optimization with high-resolution images and features or instance-specific shape fine-tuning, forming an important future work.\n\n[1] Zuffi et al.. \"Lions and tigers and bears: Capturing non-rigid, 3d, articulated shape from images.\" CVPR. 2018.\n\n[2] Bogo et al. \"Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image.\" ECCV, 2016.\n\n---\n**Manual annotations needed. Is LASSIE self-supervised? (R3: boNQ, R4: mH51)**\n\nThe proposed LASSIE framework is stated to be “self-supervised” as it does not require any image-level annotations like camera viewpoint, keypoints, or masks. We will clarify this in the manuscript to avoid confusion.\n", " We thank the reviewers for the constructive feedback. For the missing reference suggested, we will add the corresponding discussion and comparisons to the manuscript. We address the common concerns in the general responses (General Response-1, General Response-2) and specific comments in the individual response to each reviewer.\n\n---\n**Comparison with additional baselines (R1: rsf7, R4: mH51)**\n\nDue to the lack of prior works on our problem setting (sparse image optimization for articulated animal shapes), we mainly compare LASSIE with learning-based mesh reconstruction methods. Among these methods, we find 3D Safari and A-CSM most comparable to LASSIE since they also model articulation for animal classes of our interest. Most recent mesh reconstruction methods, on the other hand, are weaker baselines since they focus more on compact shapes like birds and cars. In the table below, we show additional quantitative comparisons with CSM, ShSMesh, and UMR on our horse image ensemble (from Pascal-part dataset) in terms of keypoint transfer accuracy (PCK\\@0.1) and overall mask IOU. These methods cannot handle articulations and thus perform worse on articulated animals like common quadrupeds. Other related methods suggested by the reviewers either do not release code/model (CMR, IMR, U-CMR) or assume different inputs (VMR).\n\n**Table-T1**: Quantitative comparison with additional baselines on horse images.\n| Method | PCK\\@0.1 | Mask IOU |\n| :----------- | :-----------: | :-----------: |\n| CSM | 50.3 | - |\n| ShSMesh | 51.3 | 53.6 |\n| UMR | 55.7 | 58.4 |\n| 3D Safari | 71.8 | 72.2 |\n| A-CSM | 69.3 | 72.5 |\n| LASSIE | 73.0 | 81.9 |\n\n---\n**Clarification of LASSIE optimization (R2: LjEQ, R3: boNQ, R4: mH51)**\n\nWe provide the pseudo-code below to clarify the optimization process. We will rephrase the alternative optimization to multi-stage optimization, which include 4 main stages: 1) camera, 2) camera and pose, 3) shape, and 4) all parameters. 
The parameters are optimized using the corresponding losses in each stage. In each iteration, we first update the semantic features of 3D surfaces, then use the updated features to update 3D surfaces, forming an EM-style optimization. The detailed process is shown in the pseudo-code.\n\n---\n---\n> **Parameters:** resting part rotations $\\bar{R}$, bone length scaling {$s_i$}, part rotation {$R^j$}, camera viewpoints {$\\pi^j$}, latent part codes {$e_i$}, part deformation MLPs {$\\mathcal{F}^\\Delta_i$} ($i$: part index, $j$ instance index).\n\n> **Losses:** mask IOU loss $L_{mask}$, semantic consistency loss $L_{sem}$, pose deviation loss $L_{pose}$, part angle prior $L_{ang}$, Laplacian regularization $L_{lap}$, surface normal loss $L_{norm}$.\n\n> **Multi-stage optimization:**\n> 1. Optimize {$\\pi^j$} using $L_{sem}$ until convergence\n> 2. Optimize {$\\pi^j$}, $\\bar{R}$, {$s_i$}, {$R^j$} using $L_{sem}$, $L_{mask}$, $L_{pose}$, $L_{ang}$ until convergence\n> 3. Optimize {$e_i$} and {$\\mathcal{F}^\\Delta_i$} using $L_{sem}$, $L_{mask}$, $L_{lap}$, $L_{norm}$ until convergence\n> 4. Optimize all parameters using all losses until convergence\n\n> **EM-style semantic and geometric optimization:**\n> * Repeat\n> - **E-step:** Update 3D surface features $Q$ by rendering the neural surfaces on each image, finding the nearest pixel for each 3D point, and averaging the corresponding image features.\n> - **M-step:** Optimize neural surfaces using the updated $Q$ in $\\mathcal{L}_{sem}$ (Eq. 3). Note that the M-step also involves updating other parameters with different losses depending on the optimization stage.\n> * Until end of optimization stage\n---\n---\n", " This paper studies self-supervised learning of articulated shapes under a very challenging setting where only a few (~30) in-the-wild images of an animal category are available. This setting is much more challenging than prior art as silhouettes and templates are not used and the number of images used for training is very few. To make this possible, the authors propose to model articulated objects with a 3D skeleton, and each part is modeled with a simple shape. The model is trained via analysis by synthesis, where DINO features are used to provide silhouette and 2D-3D semantic consistency. Experiments on Pascal Part and self-collected web images demonstrate that the proposed method LASSIE achieves considerably better 3D reconstructions and 2D&3D part discovery compared to baselines. Strengths:\n\n1. The proposed method LASSIE is novel and reasonable. The design of modeling each part using simple shape primitives effectively regularizes the solution space and makes the problem more tractable. The 2D-3D semantic consistency loss makes use of the semantically consistent DINO features to help discover 3D parts. The regularization and training procedure are reasonable. I believe these new insights and designs are valuable to the community. \n\n2. Thanks to the proposed method, it is the first time that the self-supervised articulated shape learning task can be addressed under such a challenging setting where only a few (~30) in-the-wild images are available. LASSIE also considerably outperforms previous methods, and has broad applications such as semantic part refinement, pose/texture transfer. \n\n3. The paper is well written. \n\n\nWeaknesses:\n\nThis paper is generally fine. The following weaknesses are not significant.\n\n1. In line 278, it is mentioned that there exists no closer framework other than 3D Safari [39] and A-CSM [17]. 
However, it seems to me that these are not the state-of-the-art method for self-supervised articulated shape learning. For example, the following papers are later than the two baselines. Please clarify why [39] and [17] are selected instead of these newer ones. \n[1] Tulsiani et al, Implicit Mesh Reconstruction from Unannotated Image Collections, NeurIPS2020. \n[2] Ye et al, Shelf-Supervised Mesh Prediction in the Wild, CVPR2021. \n[3] Li et al, Self-supervised Single-view 3D Reconstruction via Semantic Consistency, ECCV2020. \n\nTypos and grammar issues: \nLine 12: We -> we. \nLine 146: k denotes -> b denotes. \nLine 198: The use of ... -> Through the use of ... or By using ... \nLine 223: a image -> an image. \nLine 286: an object category is able to produce detailed shapes of animal bodies which -> an object category which is able to produce detailed shapes of animal bodies. \n Please clarify why [39] and [17] are selected instead of the newer ones as mentioned in weaknesses. Yes, the authors adequately addressed the limitations and potential negative societal impact.", " This paper presents an optimization-based methods that reconstructs 3D part based articulated shapes of animals from only a small collection of in-the-wild images (~30 instances) of different instances of an articulated object category, eg, horse, giraffe, elephant, penguin etc. The key idea is leverage self-supervised pretrained ViT features (DINO) to establish semantic correspondences between the images of different instances with various camera viewpoints, pose articulations, texture and environment. By constraining the 3D shapes with a set of pretrained part primitives (ellipse, cylinder, cone) on a pre-defined 3D skeleton (eg, quadrupedal or bipedal), the model is able to reconstruct compelling articulated 3D shapes of different instances, significantly better than existing learning-based models trained on only one specific category (zebra or horse). ## Strengths\n### S1 - Compelling results on an interesting and challenging task\n- The task of reconstructing articulated 3D animals from only a small collection of in-the-wild images of various articulated instances is highly ill-posed and very challenging. The proposed method seems to work well on a diverse set of examples (9 different animals demonstrated).\n- Based on the examples shown in Fig 9 in the supplementary material, the images in each category collection are also very diverse in terms of camera viewpoints, poses, and environment and illumination conditions.\n\n### S2 - Good ideas and careful implementations\n- There are two main ideas. One is to leverage self-supervised pretrained image features (DINO ViT) to establish semantic correspondences between different image instances via simple clustering. This is more powerful than previously studied SCOPS which still requires ImageNet pretraining and large training datasets.\n- The second idea is to use part-based 3D representation with a pre-defined generic 3D skeleton. This significantly reduces the complexity of this ill-posed task, and yet requires only minimal priors.\n- These two ideas are carefully implemented and validated through ablation studies. 
In particular, the use of an neural surface representation and VAE pretraining of the 3D parts seems quite effective in achieving more regularized shapes.\n\n### S3 - Good writing\n- The paper is very well written, with clear motivations, sufficient technical explanations and illustrative visualizations.\n\n\n## Weaknesses\n### W1 - Optimization-based method\n- The method optimizes over a small collection of images and does not generalize to new images out of the box. It would also be interesting to extend the idea to a learning-based pipeline that would allow inference on novel images.\n- This also makes the comparison against previous learning-based methods unfair, as the results of other methods (I assume) are obtained by one forward pass without any finetuning. It would be slightly fairer to also perform some test time finetuning on the other methods.\n\n### W2 - Heavy regularization\n- Due to the ill-posedness of the task, the model relies on a number of heavy constraints, including a simplistic part-based shape representation which also requires pretraining using a set of primitives. This largely limits the expressivity of the model and hence does not lead to fine geoemtric details.\n- The need of hand-crafted 3D skeleton is also a major limitation. Although it may seem effortless on one category, it is still quite cumbersome to scale up to all kinds of objects. Moreover, the model only works well on animals that have limited articulated poses, where the skeleton is clearly visible. I can imagine that it would still struggle for animals that highly articulated, like cats (in which case, only a small number of images may not be sufficient).\n- There are also a number of specific constraints in the model, eg, minimizing the sideway rotation angles on the leg bones, and a specific optimization procedure that alternates between viewpoints, bones, parts and features.\n\n\n### (Minor) Clarifications\n- Are the part prior latents $\\mathbf{e}_i$ randomly initialized and optimized for each part during full training (after VAE pretraining)?\n- I am confused by the visualization of the part primitives in Fig 2. Aren't the output of the neural surface networks $\\mathcal{F}_i$ supposed to be elongated shapes in the canonical space (without bone orientations)? Fig 2 seems to suggest that the part shapes predicted by the networks are already oriented.\n- Line 257: the optimization procedure seems to involve three steps: 1) camera viewpoints, 2) bone transformations, 3) latent part codes and part deformations. Is that correct? What about the 3D part features? When are they updated (the E step)?\n- It seems pretty easy for the model to confuse head and tail by looking at only the 4 clusters visualized in the zebra example in Fig 2. Since camera viewpoint is optimized in the first step, will it get confused initially and fall into local minima. And if so, how often does it occur?\n- Were the 2D keypoints used in keypoint transfer evaluation manually labelled?\n\n\n### (Minor) Typos\n- Line 146: I believe $k$ should be $b$.\n- Line 171: add $t^j \\in \\mathbb{R}^{b \\times 3}$ for completeness. Also, make the notations consistent -- $\\mathbf{t}$ vs. $t$.\n- Line 286: fix typos. Apart from a few questions that need clarifications listed above, the only additional result I might suggest is to also perform test-time optimization with previous methods for a slightly fairer comparison, maybe in the final version. 
The settings are different enough though, so this may not really affect the evaluation of the paper. The paper has included a discussion on the limitations of the model as well as illustrative examples in the supplementary material.", " The paper proposes a new task formulation in the area of non-rigid reconstruction and parts discovery. The model is fit to a small collection of images (25–30) of unrelated instances of a category (such as animal species). It enables discovering 3D pose and shape of objects in all images, along with part segmentation. No supervision on the level of images is required, however the users have to provide the topology and a typical pose of the skeleton, assignment of image parts to image features’ clusters, and also there are specific constraints on the movement of certain joints (such as legs in case of quadrupeds). The model produces better PCK numbers than competing methods (Articulated CSM and 3D Safari) and visually convincing reconstructions on the examples shown. Originality:\n* I see the method as an extension of UMR [19] to articulated shapes, which are naturally represented as a union of convex’ish primitives, where SCOPS features are upgraded to DINO. Extending to articulated shapes requires overcoming some technical challenges, such as adding regularisations and priors.\n\nClarity:\n* (+) In general, the paper is well written, and the method is mostly clear,\n* (−) some of the limitations are either not addressed or not mentioned in the introduction (such as the required category-level supervision), while l. 51 claims that the method is “completely self-supervised”,\n* (−) I am confused about the final optimisation algorithm; it runs block-coordinate optimisation (l. 256 and on), but then there is EM-like optimisation of features inside one of the steps; I think the paper would benefit from pseudo-code of the training loop,\n\nSignificance:\n* (+) getting rid of instance-level supervision is a great step; the method can scale to larger collections easily;\n* (+) the required prior information is indeed easier to obtain than e.g. category-specific mesh.\n\nExperiments:\n* (+) visual demonstrations look good;\n* (−) is the EM-like optimisation important? What would be the result if the original DINO features were used in (4)? (I think it corresponds to 1 iteration of “EM”);\n* (−) evaluating using 2D keypoint transfer may be used by other papers but it does not tell much about reconstruction quality; representing the animal’s shape with a flat surface would result in a relatively good PCK while such reconstruction is visually unacceptable. I suggest evaluating on datasets where 3D ground truth is available, such as synthetic datasets or human datasets (such as Human 3.6M) – without expectation of performing better than methods specialising on human reconstruction,\n* (+) Applications paragraph/results are nice.\n\n=============\n\nTypos:\n* line 80: In contrasted ← in contrast;\n* line 146: *k* ← *b*;\n* line 211–213: why is it called EM-like algorithm? I would call it block-coordinate descent; for EM, I would expect having an explicit likelihood function depending on latent random variables, and taking expectation over those at E-step,\n* line 285: “A-CSM assume high-quality skinned model”: IIRC, it only requires the mesh segmented to parts but no skinning weights, which is a much lighter assumption.\n* line 286: “animal bodies which”.\n\nUPD. The other reviewers pointed me towards the supplementary video that I had previously overlooked. 
It reassured me in the reasonable reconstruction quality, at least on the shown examples. I like the new problem setting and the method. My only remaining (but major) concern is the presentation and positioning of the method and the results: in the current manuscript, the relation to UMR (which is strong, at least on the ideas level) is essentially ignored; Experimental section wrongly claiming conclusive quantitative evaluation of 3D reconstruction; the limitations are not addressed, and the Introduction / abstract massively oversell the method (which it ironically does not even needs). I am changing my rating to borderline to not block the acceptance; I will thus leave it to ACs to decide whether we can accept the paper in this form. Major issues:\n* I suggest to allocate more text to position the method w.r.t. UMR [19] more clearly;\n* I suggest to evaluate reconstruction quality on datasets with 3D ground truth (see above);\n* can you clarify the comparison protocol with the baselines: are they evaluated on the hold-out evaluation set (vs. the proposed method that fits the model to all available images as far as I understood)? \n* please address the limitations below.\n\nOther issues:\n* In Chamfer loss (3), is the closest point chosen using the distance (4) rather than just geometric distance? If so, what is the motivation of that? I feel like the loss should enforce points to reproject in the correct locations; matching the features should not compensate for that.\n* in Experiments, baselines sometimes are trained using a different category; please indicate clearly in the results table where it is the case,\n The paper lists some limitations, but crucially misses some others:\n* Can the method generalise to new images, i.e. can it infer the pose for a new instance without re-fitting the model? The competing methods only need to run the forward pass. That may be important for interactive applications.\n* The method uses the same shapes of the individual parts across instances (ll. 172–174), so the resulting meshes differ only in the parts’ rigid transformations. This is an approximations since in reality individuals differ in the bone lengths and deformation of their flesh.\n* Another consequence of that is, if I understand correctly, reposing the shape can result in gaps between the parts. In fact, lack of those gaps is guaranteed for training instances only thanks to rendering losses; this means that the invisible surface may not be represented properly,\n* the loss L_pose penalises the difference from the rest pose; it is usually difficult to find the weight of such loss so that the method is able to fit extreme poses,\n* the need for heuristics like L_ang (5) limits the applicability of the method to new categories where users would need to figure out similar constrints.", " The submission propose Lassie to obtain articulated 3D reconstructions of a skeleton-based animal class (shown are quadrupeds and bipeds) from an image collection of 20-30 images of different individual animals. Given a fairly generic animal skeleton (16 vertices and 15 edges), it learns a 3D part for each edge/bone. The 3D part is represented by regressing 3D offsets for a 3D unit sphere via a coordinate-based MLP. The shapes of the part are shared across instances, only a similarity transform is applied based on the skeleton of each instance. 
The method requires only little outside knowledge: It clusters DINO-ViT [5] image features to obtain a very rough 2D part annotation of the images, and it requires manual matching between these part annotations and the skeleton parts. The optimization determines instance-specific skeleton parameters and class-specific part parameters. The main loss is a novel semantic matching loss that has some similarity to the one used in BanMo since it transfers image features to the 3D reconstruction and then tries to match the reconstruction to the images using those features. \n\nThere is no prior work that tackles this problem setting. Comparisons to A-CSM and 3D Safari show better results for Lassie, in several distinct settings. =====STRENGTHS=====\n\nThe problem setting is interesting and it is remarkable that the proposed method takes only 10 minutes on a consumer GPU to obtain quite decent articulated reconstructions from only 20-30 images of an animal class. This opens up the door for a lot of follow-up work and raises a bunch of interesting questions, although unfortunately code will not be released.\n\nWhile there are an MLP for the part shape and DINO image features, the majority of the method is not much concerned with neural networks, which is a nice change of pace.\n\nThe method is not exactly simple but also far from being convoluted. Its complexity seems appropriate to me and the design choices are argued for in the paper. \n\nThe evaluation is done well, including additional experiments in the supplement.\n\nThe paper is well written. The presentation is nice, including clear tables and figures.\n\nI appreciate the name of the method.\n\n=====WEAKNESSES=====\n\nThere are no significant weaknesses I can see. I do have a number of questions and suggestions though, see below.\n\nMinor writing issues:\n* L.12: \"We\" should be lower case\n* L.30: \"a\" -> \"the\"\n* L.146: \"k\" -> \"b\"\n* L.286: sentence ends in \"which\"\n\nRelated Work:\n\n* Missing related works that are from the same line of work as A-CSM [17]: Kanazawa et al. Learning category-specific mesh reconstruction from image collections (CMR; ECCV 2018) and its follow-up works, see e.g. Table 1 in Wu et al. DOVE: Learning Deformable 3D Objects by Watching Videos (arxiv preprint 2021):\n\n- Kulkarni et al. Canonical surface mapping via geometric cycle consistency (CSM; ICCV 2019), \n- Goel et al. Shape and viewpoints without keypoints (U-CMR; ECCV 2020), \n- Li et al. Self-supervised single-view 3D reconstruction via semantic consistency (UMR; ECCV 2020), \n- Li et al. Online adaptation for consistent mesh reconstruction in the wild (VMR; NeurIPS 2020),\n- Kokkinos et al. Learning monocular 3D reconstruction of articulated categories from motion (CVPR 2021),\n- Kokkinos et al. To The Point: Correspondence-driven monocular 3D category reconstruction (NeurIPS 2021)\n\n* Less crucial, several animal shape estimation methods that use SMAL or similar statistical models are not cited:\n\n- Biggs et al. Creatures great and SMAL: Recovering the shape and motion of animals from video (ACCV 2018)\n- Biggs et al. Who left the dogs out: 3D animal reconstruction with expectation maximization in the loop (ECCV 2020)\n- Badger et al. 3D bird reconstruction: a dataset, model, and shape recovery from a single view (ECCV 2020)\n- Wang et al. Birds of a feather: Capturing avian shape models from images (CVPR 2021)\n\n* The early important work of Cashman et al. What shape are dolphins? 
Building 3d morphable models from 2d images (TPAMI 2013) is also not cited.\n* Another work that reconstructs dynamic objects from in-the-wild image collections is Wu et al.'s Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild (CVPR 2020 best paper).\n* While NeRF and a couple static variants are mentioned, the NeRF field has also produced some works on dynamic reconstruction of general objects from video (not image collections; although see Banmo). These could be added for completion, see e.g. the recent survey Tewari et al. Advances in Neural Rendering (Eurographics 2022).\n\n\n---- post rebuttal\n\nMy rather minor concerns and questions have been addressed well. I increase my rating to accept. Questions\n\n* The semantic loss is interesting. It would be nice to see a discussion on how it differs from BanMo's loss (see summary above).\n* I would like the authors to comment on why comparisons against CMR, CSM, U-CMR, VMR, etc. (see Weaknesses) is not necessary, e.g. because there is simply no code. \n* (Almost?) all methods that follow CMR (reconstructing animals/dynamic objects from image ensembles) evaluate on the CUB-200-2011 birds dataset. Why is there no evaluation on birds? Is there too much variablility? A-CSM [17], against which the submission compares, also evaluates on this dataset.\n* The checklist states that error bars are provided. But I do not see any numbers or any other indication that multiple runs/seeds were used anywhere. Am I overlooking something or is this answer in the checklist wrong?\n\n* While not crucial, the ablation on the number of images per animal in supplemental Sec. 2.2 uses the zebra class, which seems like a quiet easy class (see e.g. Fig. 9 in the supplement; all zebras have a pretty similar pose). I am curious how well a reduced number of images would work for giraffes or kangaroos or maybe horses, especially when keeping diverse poses and appearances in that subset of images.\n\nSuggestions (do not need to be addressed in the rebuttal but should be considered for a revised version of the paper):\n\n* I am not really satisfied with saying that no annotations are used (e.g. L.166). The DINO features give rise to 2D part segmentation masks. DINO is self-supervised, but still. \n* While I find the evaluation sufficient already, results on humans would be interesting. \n* The conclusion claims that no human annotations (L.342f) is used, but that's not correct (L.231ff). Please make sure that claims about annotations are precise everywhere.\n* Please provide the mathematical formulation of L_lap and L_norm (and all other terms that might not be defined) in the supplementary material.\n* In Fig. 10 and 11 in the supplemental document, please make the novel views of the same size as the input views.\n* The current uniformly colored visualization of the parts shows coarse-level correspondences across instances. It could be interesting to color each part with some kind of pattern (e.g. a rainbow or color gradient) to visualize fine-level correspondences.\n* L.167f says that the instances can have \"varying texture (appearance) due to pose, camera angle, and lighting\". What about the individual identity, i.e. different individuals have different fur? The method seems to not have an issue with it, I think it should be added to that list.\n* L.256ff state that alternative optimization of camera, pose, and shape is used. Does that mean that the camera is optimized at the beginning of training and then never again? 
Or is it camera, then pose, then shape, then camera, then pose, then shape, etc.? If it's the former, maybe calling it training stages instead of alternative optimization would be more descriptive.\n* What's the value of m in L.215?\n* Please render the images in a higher resolution. They are quite pixelated already. The checklist states that potential negative societal impact is discussed. While I don't see any such potential negative impact, I don't believe that that is discussed anywhere in the paper, despite what the checklist says.\n\nLimitations are discussed, including failure cases in the supplement." ]
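Illustrative note on the optimization description in the author responses above: the rebuttal summarizes LASSIE's optimization as four stages (camera; camera and pose; shape; all parameters), each wrapping an EM-style update of 3D surface features. Purely as an illustration of that control flow, and not the authors' implementation, the following is a minimal, self-contained Python sketch; every name, array shape, and loss below is a hypothetical placeholder.

```python
# Illustrative sketch of a multi-stage, EM-style optimization loop
# (hypothetical names and losses; not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameter groups: per-instance cameras, per-instance articulation,
# and shared latent part codes.
params = {
    "camera": rng.normal(size=(30, 6)),
    "pose":   rng.normal(size=(30, 15)),
    "shape":  rng.normal(size=(8, 4)),
}
surface_feats = rng.normal(size=(8, 64))    # per-part 3D surface features (the "Q" in the pseudo-code)
image_feats = rng.normal(size=(30, 8, 64))  # dummy per-image, per-part 2D features

def e_step(image_feats):
    """E-step: re-estimate 3D surface features by averaging image features per part
    (a stand-in for nearest-pixel pooling after rendering the neural surfaces)."""
    return image_feats.mean(axis=0)

def placeholder_loss(params, groups):
    """Quadratic placeholder standing in for the stage-specific loss terms."""
    return sum(float(np.sum(params[g] ** 2)) for g in groups)

def optimize_stage(params, surface_feats, groups, iters=50, lr=0.1):
    for _ in range(iters):
        surface_feats = e_step(image_feats)     # E-step
        for g in groups:                        # M-step: gradient descent on the
            params[g] -= lr * 2.0 * params[g]   # quadratic placeholder (grad = 2*p)
    return params, surface_feats

# Stage schedule mirroring the pseudo-code: camera -> camera+pose -> shape -> all.
for groups in (["camera"], ["camera", "pose"], ["shape"], ["camera", "pose", "shape"]):
    params, surface_feats = optimize_stage(params, surface_feats, groups)
    print("finished stage over", groups, "| loss =", round(placeholder_loss(params, groups), 4))
```

The point of the sketch is only the schedule: each stage widens the set of tunable parameter groups, and within a stage the latent surface features are re-estimated (E-step) before the selected parameters are updated (M-step).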
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "YedAzEZVM8Md", "yHiCUhkT7lv", "9xPLXT1AhtJ", "5G9bqMqZbzA", "nips_2022_0TDki1mlcwz", "iWwjzdo9lCB", "dYUfJyHObmW", "1ylsrqlhqh", "Vdb6l9FlDql", "Vdb6l9FlDql", "Vdb6l9FlDql", "9czt0pTJERN", "ASzaLqRnCmD", "qTHSMJFN7Gh", "Ecz3KPbPn_O", "nips_2022_0TDki1mlcwz", "nips_2022_0TDki1mlcwz", "nips_2022_0TDki1mlcwz", "nips_2022_0TDki1mlcwz", "nips_2022_0TDki1mlcwz", "nips_2022_0TDki1mlcwz" ]
nips_2022_PO6cKxILdi
Bayesian Risk Markov Decision Processes
We consider finite-horizon Markov Decision Processes where parameters, such as transition probabilities, are unknown and estimated from data. The popular distributionally robust approach to addressing the parameter uncertainty can sometimes be overly conservative. In this paper, we propose a new formulation, Bayesian risk Markov decision process (BR-MDP), to address parameter uncertainty in MDPs, where a risk functional is applied in nested form to the expected total cost with respect to the Bayesian posterior distributions of the unknown parameters. The proposed formulation provides more flexible risk attitudes towards parameter uncertainty and takes into account the availability of data in future time stages. To solve the proposed formulation with the conditional value-at-risk (CVaR) risk functional, we propose an efficient approximation algorithm by deriving an analytical approximation of the value function and utilizing the convexity of CVaR. We demonstrate the empirical performance of the BR-MDP formulation and proposed algorithms on a gambler’s betting problem and an inventory control problem.
Accept
Motivated by the often overly conservative characteristics of distributionally robust MDPs, this paper employs (nested) Bayesian posterior distributions to model the uncertainty over MDP parameters. The programming solution is similar to belief state approximation methods for POMDPs. The experiments (after revision) seem to demonstrate the advantages of this approach. The reviewers believe the paper could be improved with better theoretical analyses and/or more compelling experiments (higher dimensional tasks in particular). The paper lacks a strong advocate among the reviewers, but their aggregate sentiment is that it is worth accepting to the conference unless there are other more deserving works that it would displace.
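Illustrative note: the following display sketches one plausible way to write the nested-risk recursion that the abstract and meta-review above describe. The notation (posterior $\mu_t$ over the unknown parameter $\theta$, randomness $\xi$ with density $f(\cdot;\theta)$, augmented state $(s_t,\mu_t)$) is borrowed from the author responses below; the CVaR convention assumed here is the expected cost over the worst $\alpha$-fraction of parameter draws, and the exact definitions in the paper may differ.

$$
V_t(s_t,\mu_t) \;=\; \min_{a_t}\; \mathrm{CVaR}_{\alpha}^{\theta\sim\mu_t}\!\Big(\mathbb{E}_{\xi\sim f(\cdot\,;\theta)}\big[\,c_t(s_t,a_t,\xi) + V_{t+1}(s_{t+1},\mu_{t+1})\,\big]\Big),
\qquad
\mathrm{CVaR}_{\alpha}(X) \;=\; \min_{u\in\mathbb{R}}\Big\{\,u + \tfrac{1}{\alpha}\,\mathbb{E}\big[(X-u)^{+}\big]\Big\},
$$

where $s_{t+1}$ is the next state generated by $\xi$ and $\mu_{t+1}$ is the Bayesian update of $\mu_t$ after observing $\xi$. The variational form on the right is the standard Rockafellar-Uryasev representation; it is what makes the convexity-based approximation (the optimization over $u_t$ mentioned in the responses) tractable.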
test
[ "3mIlAn1fPdS", "_0jWgzld7S", "D_Mb-LjQJ-", "TFGjEgW-av", "odOtcTwD7IE", "37iEV8Sxb2l", "TlNjywKErww3", "KcyoQ3qQQ1t", "umkwerzRnSn", "5ZWXgMvVB31", "VskWNCXhYS7", "TXFOPE_ukbR", "b3Q1oRqdXRY", "jJM7DkwOJCn", "0MPwW4GV8k7", "gYu3w85eC6U", "z4wv5X_K88O", "BY5TBbu5ypn", "EivJXf0zS42", "u53tsiI4tQY", "dust7sxSeA3", "6tKNg66mjLJ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply and acknowledgement! We look forward to your final recommendation!", " I want to thank the authors for their further clarifications over my follow-up comments. I believe I now have a better understanding of the merits of this submission, and I will discuss them with other reviewers before providing a final recommendation.", " We thank you for acknowledging our work and effort! We also appreciate that you raised your score. \n\nWe would like to emphasize again that the POMDP literature only considers alpha-function representation for the risk neutral case and does not generalize to the risk-averse setting. More specifically, the risk-neutral alpha-function representation makes use of the piecewise linearity of the optimal value function, which does not hold in the risk-averse setting. Moreover, the CVaR risk functional adds additional complexity to the alpha function approximation due to the optimization over $u_t$. \n\nWith all due respect, we do not think the restriction to finite-horizon MDPs is an issue for a paper (a general paper, not just our paper). As you probably know, the treatment of infinite-horizon MDPs is often different from finite-horizon MDPs. More specifically, an infinite-horizon MDP is essentially a fixed-point problem, which entails several classes of methods including value iteration, policy iteration, and linear programming methods. None of these methods are used for finite-horizon MDPs, although value iteration (for infinite horizon) is similar to dynamic programming (for finite horizon) in format. Classical books on MDPs, such as \"Dynamic Programming and Optimal Control\" by Dimitri Bertsekas, also treat finite-horizon and infinite-horizon MDPs in separate chapters. Infinite-horizon MDP is definitely an important class of problems, but there is nothing wrong for a paper to focus on finite-horizon MDPs, which are important problems as well and have wide applications.", " I thank the authors for the responses, especially their effort to explain the novelty of the work and how the model differs from existing works. I agree that the optimization models and methods are not as trivial as I thought. However, given all the existing works in the literature, I believe that the use of $\\alpha$ function and the development of the approximation procedure in the context is quite straightforward. Some issues still remain, e.g., finite-horizon MDP. I have raised my scores accordingly, but I still think that the paper is not ready for publication. ", " Thank you for your time reviewing our paper. We have done our best to address your concerns, summarized the rationale for doing so, and modified the paper accordingly. We just uploaded a new version that highlights the main changes in red. If you have any further questions, please feel free to leave us a message and we are happy to discuss it. Please note the discussion period closes soon, so we will greatly appreciate your feedback either in discussions or scores.", " Thank you very much for your valuable suggestion. We have marked the main changes in the revised version in red. Below are our responses to your latest comments and questions.\n\n-- **I got the point that the nested (time-consistent) form is analytically better than the static form. However, the nested form can only be solved approximately, whereas the static form is amenable for computation. Can we say that the approximated nested form is still analytically better than the static form? 
I believe this point would require further discussion, as it is crucial to motivate the approach in the first place.**\n\nThank you for your question. First, the static form can only be solved approximately as well, since it also involves an additional continuous state. For example, [1] considers the static CVaR risk functional and only shows an approximation algorithm to the value iteration due to the introduction of an additional continuous state; [2] also optimizes a static CVaR risk functional over the total cost and proposes some approximation algorithm. Second, since the static form can only be solved approximately as well, it would be unfair to compare the *approximated* nested form with the *exact* static form, while it is fair to say the *exact* nested form is analytically better than the *exact* static form. \n\n-- **I have briefly looked at the updated numerical results: Even after filtering out the constant, it is still not clear to me how can we conclude that BR-MDP is significantly better. Can the authors better explain what to look for in the experimental results, and how they are supporting the theoretical claims in the paper?**\n\nThank you for your question. The main thing to look for in the experimental results is the balance of the mean and variance of the actual performance of each formulation's solution: the smaller the mean cost and the variance, the better the approach. In a nutshell, BR-MDP achieves the best balance between the mean and the variance; whereas the nominal approach has a much larger variance (indicating no robustness, subject to distributional shift), and DR-MDP has 0 variance but the worst mean (indicating that although it is robust, it is overly conservative). We have revised the explanations in the numerical section to make this clearer. \n\nIn this latest revised version, we also made a slight change to the betting problem setting as follows: when betting one dollar, a win results in two dollars (previously a win was one dollar); everything else stays the same. This change enlarges the range of expected cost values, and hence makes the differences among different approaches more obvious. We report the new numerical results in Table 1 and Table 2 in this revised version. For example, when the data size is small ($N=5$), our proposed BR-MDP formulation provides more robustness (much smaller variance) than the nominal approach. This can be seen from variance $14.67$ of BR-MDP formulation versus $54.12$ of nominal approach when $\theta^c=0.45$, and $15.05$ of BR-MDP formulation versus $46.92$ of nominal approach when $\theta^c=0.55$. In terms of the mean cost, our BR-MDP formulation is also better compared to the nominal approach when the data size is small. This can be seen from mean $-7.83$ of BR-MDP formulation versus $-5.88$ of nominal approach when $\theta^c=0.45$, and $-16.27$ of BR-MDP formulation versus $-15.85$ of nominal approach when $\theta^c=0.55$. One of the reasons is the adaptivity to the data process of our BR-MDP formulation. When the data size is small, the nominal approach is subject to the parameter uncertainty, and the plugged-in MLE estimator may deviate from its true value a lot. On the other hand, our BR-MDP formulation takes into consideration the future data realization, and thus produces more consistent policies over different replications that also behave well under the true model. Finally, compared to DR-MDP, which only fixates on the worst-case scenario, our BR-MDP formulation is much less conservative. 
This can be seen from the much better mean cost of our BR-MDP formulation than the DR-MDP formulation. \n\n\n[1] Chow, Y., Tamar, A., Mannor S., and Pavone M., 2015. Risk-sensitive and Robust Decision-making: a CVaR Optimization Approach. Advances in Neural Information Processing Systems, 28.\n\n[2] Rigter, M., Lacerda, B. and Hawes, N., 2021. Risk-averse Bayes-adaptive Reinforcement Learning. Advances in Neural Information Processing Systems, 34, pp.1142-1154.", " First, I want to thank the authors for their thorough replies. I would suggest them to clearly mark the changes made to the manuscript (e.g., reporting the new parts in a different color) so that it will be easier for reviewers to go through them.\n\nI still have a couple of questions regarding the significance of time-consistency and the experimental results.\n\n1) I got the point that the nested (time-consistent) form is analytically better than the static form. However, the nested form can only be solved approximately, whereas the static form is amenable for computation. Can we say that the approximated nested form is still analytically better than the static form? I believe this point would require further discussion, as it is crucial to motivate the approach in the first place.\n\n2) I have briefly looked at the updated numerical results: Even after filtering out the $cT$ constant, it is still not clear to me how can we conclude that BR-MDP is significantly better. Can the authors better explain what to look for in the experimental results, and how they are supporting the theoretical claims in the paper?", " - **Can the authors better explain why the Eq. 7 is significantly easier to compute than the exact Eq. 6? I guess the main benefit is pulling the min out of the expectation, but I found this paragraph quite hard to process (especially lines 218-222 could be revised)**.\n\nThank you for your comment. Please see line 200-203 for a revised statement. For the exact alpha-function representation like in Eq. 6, to compute $\\alpha_t(s_t,\\theta)$, we need to find the minimizer $\\alpha_{t+1}^{\\*(\\xi)}$, which attains the minimum of $\\int_{\\Theta} \\alpha_{t+1}(s_{t+1}, \\theta) \\frac{\\mu_t(\\theta)f(\\xi;\\theta)}{\\int_{\\Theta}\\mu_t(\\theta)f(\\xi;\\theta)d\\theta}d\\theta$ for every realization of $\\xi$. We use superscript $\\xi$ to explicitly show that for each $\\xi$, we need to find a minimizer $\\alpha_{t+1}^{\\*(\\xi)}$ in set $\\Gamma_{t+1}$. Since for each $\\xi$, there are $|\\Gamma_{t+1}|$ candidates for $\\alpha_{t+1}^{*(\\xi)}$ (which means one has to search over those candidates to find the minimizer), there are a total of $|\\Gamma_{t+1}|^{|\\Xi|}$ candidates for the alpha function at time stage $t$. On the other hand, in Eq. 7, the minimum is outside the integral over $\\xi$, such that we do not need to find minimizer for each $\\xi$, thus we greatly reduce the number of candidates for the alpha functions. \n\n- **Theorem 3.4 characterizes the approximate value function as an upper bound of the exact value. Do the authors believe there is hope to assess guarantees on the gap between the two**.\n\nThank you for your comment. For the proposed alpha function approximation algorithm, there is no theoretical guarantee on the gap between the two. Similar to [6] and [7], we apply Jensen's inequality to exchange the order of minimum and integral, and the gap between the two will increase as the time horizon increases. 
Hence, this class of approaches are more suitable for problems with a small time horizon.\n\n- **The authors answered 'Yes' to the checklist question on the limitations of their work, but they did not provide motivation or context for their answer. Can the authors list the main limitations of their approach**.\n\nThank you for your comment. We have added the following discussion of limitations of our approach in the conclusion section in our revised version: \n \n\"One of the limitations of our work is the parametric assumption on the distribution of randomness. In the future work, we wish to extend the BR-MDP formulation to non-parametric Bayesian setting, and evaluate the performance of the proposed formulation and algorithm on real-world data sets in more challenging problems. In addition, the proposed alpha-function approximation algorithm provides an upper bound of the exact value, while there is no theoretical guarantee on the gap between the two. In future we will develop more efficient approximation algorithms with a convergence guarantee, such as methods based on stochastic dual dynamic programming. There are also other interesting directions, such as extending the BR-MDP formulation to an infinite horizon problem and utilizing function approximation to improve the scalability of the proposed approach to more complex domains.\"\n\n[6] Hauskrecht, M., 2000. Value-function Approximations for Partially Observable Markov Decision Processes. Journal of artificial intelligence research, 13:33–94.\n\n[7] Zhou, E., 2013. Optimal Stopping under Partial Observation: Near-value Iteration. IEEE Transactions on Automatic Control, 58(2):500–506.\n\n", " - **Chow et al., 2015 [1] make an interesting connection between cost-variability risk and epistemic risk. They prove that a solution that is sensitive to the cost-variability provides also robustness to the epistemic uncertainty as a by-product. Do the authors think that a similar case could be made in their setting, i.e., that a solution to the BR-MDP problem could provide some robustness to the cost-variability risk as well**.\n\nThank you for bringing up this interesting point. We do not have a definitive answer to this question, but our conjecture is that BR-MDP provides robustness in a similar sense as BRO (in static optimization), which balances trade-off between the posterior mean performance and the variability of actual performance of its solution. However, the DRO approach (proposed for hedging against the epistemic uncertainty in static optimization) has been show to provide robustness to the cost-variability risk when the ambiguity set is small, see [2][3]. So, DRO probably has a close connection with [1]. As far as we know, the interpretations of robustness in DRO or BRO have only been rigorously studied for static optimization, and it would be an interesting direction to study if these interpretations carry over to the dynamic problem. \n\n- **The proposed BR-MDP framework is similar in flavor to a Bayes-adaptive approach with a time consistent risk functional over the model parameters. Can the authors discuss the impact of the time consistency? Can they explain why an approximate solution to the BR-MDP problem is necessarily better than a (possibly exact) solution to the BAMDP problem with a risk functional applied on the full trajectory rather than in a nested form**.\n\nThank you for your comment. The impact of time-consistency has been discussed extensively in the literature (see [4], [5]). 
One of the considerations for time-consistency in this paper is its ``dynamic programming'' style property: for a chosen risk functional, if a policy is risk-optimal for a $T$-stage problem, then the component of the policy from the $t^{th}$ time stage until the end (where $t < T$) is also risk-optimal. The nested form can be shown to be analytically better than the static form (i.e., the risk functional applied to the full trajectory); please see the appendix for the proof on a three-stage problem, which can be extended to any number of stages. Here, we illustrate it with the betting problem considered in this paper. Consider a two-stage betting problem, where we are only given five past betting records with four wins and one loss, and the true winning rate is $10\\%$. The risk functional is chosen to be CVaR with $\alpha=\frac{1}{5}$. A win yields a cost of -2 and a loss yields a cost of 1. The gambler decides to bet or not for two runs. It can be easily checked that the optimal policy for the static formulation is always to bet in the two runs. The optimal policy for the nested formulation is to bet in the first run. If it turns out to be a win, then the gambler bets in the second run; otherwise, the gambler chooses not to bet. When evaluating the two policies on the true model (winning rate is $10\\%$), one can easily see that the performance of the optimal policy in the nested formulation is better than that in the static formulation.\n\n- **Is the proposed approximate methodology really practical? Can the authors comment on how the Eq. 8 could be computed/estimated in more challenging domains**.\n\nThank you for your question. For more challenging domains, one could expect longer horizons, larger state space and action space, higher dimension of parameter space and randomness space, and thus high-dimensional integration may be involved. The proposed approximate methodology remains practical in this case. Note that the number of alpha-functions at each time stage is constant and equals the cardinality of the action space. For high-dimensional integration, one can resort to Monte Carlo integration, which enjoys a convergence rate of $1/\sqrt{\text{sample size}}$ and is independent of the dimension. In future work, we wish to utilize function approximation to improve the scalability of our approach to more complex domains. \n\n[2] Gotoh, J., Kim, M.J., and Lim, A., 2018. Robust Empirical Optimization is Almost the Same as Mean-variance Optimization. Operations Research Letters, 46(4):448–452.\n\n[3] Duchi, J.C., Glynn, P.W., and Namkoong, H., 2021. Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach. Mathematics of Operations Research, 46(3):946–969.\n\n[4] Iancu, D., Petrik, M., and Subramanian, D., 2015. Tight Approximations of Dynamic Risk Measures. Mathematics of Operations Research, 40(3):655–682.\n\n[5] Shapiro, A., 2021. Tutorial on Risk Neutral, Distributionally Robust and Risk Averse Multistage Stochastic Programming. European Journal of Operational Research, 288(1), pp.1-13. \n\n", " Thank you very much for the valuable time you have spent reviewing our work. Below are our responses to the comments and questions you have.\n\n- **My main concern regards the empirical validation: It is ok for an essentially methodological/theoretical paper to have an empirical analysis in toy problems, but it should showcase the benefit brought by the method. 
Instead, the performance seems to be close to the baselines, especially to the naive maximum likelihood estimator**.\n\nThank you for raising this concern. We found that we made a mistake in our previous implementation: we added a constant $c$ (which is an upper bound on the stage-wise cost) to the cost at each time stage to ensure the stage-wise cost is non-negative (as required by our algorithm), but forgot to subtract $cT$ (which is the total added constant over the $T$ stages) from the algorithm output to recover the total cost of the problem. As a result, the added constant $cT$ dominates the total cost and obscures the difference between different methods. After correcting this mistake, we can now see a clearer difference between the formulations and also see the benefits of BR-MDP: it balances the trade-off between the mean and variance of the actual performance of its solution. In particular, we can see that the nominal approach often results in the largest variance in actual performance, indicating that it does not provide any robustness against distributional shift. On the other hand, DR-MDP often results in the most conservative policy, which has zero variance in its actual performance but usually the worst mean cost. BR-MDP strikes a middle ground between these two and provides a good balance between the mean performance and the variability of the actual performance. Please see the numerical section in the revised version for more details.\n\n- **The notation is quite convoluted and not always sharp. This undermines the clarity of some portions of the work**.\n\nThank you for your comment. The heavy notation is due to the complexity of the considered problem (risk-averse setting, CVaR, alpha-function approximation, etc.). In this revised version, we have tried to condense and streamline the notation as much as possible, and have also provided a table summarizing the main notation in the appendix for future reference.\n\n- **The work by Chow et al., 2015 [1] seems to be closely related to this submission. Although they tackle the cost-variability risk instead of the epistemic risk, they also consider a CVaR risk measure in a nested form to derive a Bellman equation (Sec. 3) that allows for approximate dynamic programming. Can the authors discuss the relation with this prior work, and especially the technical differences with respect to this submission**.\n\nThank you for your comment. There are two key differences between [1] and our submission. First, [1] considers a standard MDP with CVaR replacing the expectation to account for the aleatoric uncertainty. The cost function and transition probabilities are both known in their case. We consider the case where the cost function and transition probabilities are unknown, and the CVaR risk functional is used to account for the epistemic uncertainty. Second, due to the unknown nature of the MDP problem in our setting, as we take an action and observe the state transition, we update the posterior distribution on the unknown parameter. This Bayesian updating results in an exponential growth in the set of reachable augmented states, which prohibits the use of the value iteration approach proposed in [1].\n\n[1] Chow, Y., Tamar, A., Mannor S., and Pavone M., 2015. Risk-sensitive and Robust Decision-making: a CVaR Optimization Approach. Advances in Neural Information Processing Systems, 28.\n", " - **It seems to me that the low variation of performances of a model is always desirable**.\n\nThank you for your comment.
A low variation of the performance of a model is not always desirable if it sacrifices too much mean performance. In general, a robust approach strives for a good balance between the variability and the average of the performance. For example, consider the following betting example: the true winning rate is $\\theta^c=0.8$. If the gambler always chooses not to bet at any time stage (for example, with the distributionally robust formulation), the variation of the performance of this model is always 0. However, this model is not desirable, as the gambler has a high winning rate and should choose to bet. As another example, consider a newsvendor problem. The newsvendor can choose to sell nothing, which means the cost is always 0. However, she may also have a high chance of making money if she chooses to order some newspapers to sell, when the distribution of the customer demand has a small probability mass on zero. As a result, a low variation of performance is not always desirable, and it is more important to balance the variation and the mean performance. \n\n- **The advantage of BR-MDP over DR-MDP in Table 1 is too minor (59.64(1.42) versus 60.00(0.00)). Hence this may not be an ``evident illustration\"**.\n\nThank you for your comment. We found a mistake in our previous implementation in the betting problem: we added a constant $c$ (which is an upper bound on the stage-wise cost) to the cost at each time stage to ensure the stage-wise cost is non-negative (as required by our algorithm), but forgot to subtract $cT$ (which is the total added constant over the $T$ stages) from the algorithm output to recover the total cost of the problem. As a result, the added constant $cT$ dominates the total cost and obscures the difference between different formulations. After correcting this mistake, we can see a clearer difference between BR-MDP and DR-MDP (see Table 1): for example, in the betting problem with true parameter $\\theta^c=0.55$, BR-MDP (exact, $\\alpha=0.4$) yields mean cost -2.28 and variance 4.28, while DR-MDP yields mean cost 0.00 and variance 0.00. In the inventory control problem, the advantage of BR-MDP (mean 81.63, variance 2.27) over DR-MDP (mean 99.77, variance 0.00) is also significant. \n\n- **To better illustrate the claim that ``the difference is small'', I would suggest the authors provide a table to report the relative gaps**.\n\nThank you for your valuable suggestion. We have reported the relative gaps in the appendix in our revised version. \n\n- **Could the authors briefly clarify why the exact solution of CVaR BR-MDP is computationally available in the experiments (Since they state in the first paragraph of Section 4 that this exact solution is hard to obtain)**.\n\nThank you for your question. To obtain the "exact" (more precisely, close-to-exact) BR-MDP optimal value function, we discretize the continuous state, i.e., the posterior distribution, with a small step size of 0.1, which results in a very large state space, and then we conduct dynamic programming on the discretized problem to obtain the optimal value function. This is a brute-force way to compute the "exact" value function, and that is why the computational time for the exact BR-MDP formulation is extremely large compared to the approximate formulation. We have clarified this in the revised version.\n\n- **Could the authors explain why the computation times of "Nominal" and "DR-MDP" are not reported in Table 1**.\n\nThank you for your question.
In the revised version, we include the computational times of nominal and DR-MDP in Table 1. \n\n- **I would suggest the authors explain why the performances of DR-MDP are not reported in Figure 1a**.\n\nThank you for your suggestion. In the revised version, we have included the performance of DR-MDP in Figure 1a, which is a vertical line at 60.\n\n- **Possible typos: Line 121: Z should be X; Line 169: should it be $E_{\\xi_1}$; Line 205: should it be $V_{t+1}^{\\*}(s_{t+1},\\mu_{t+1})$; Line 211: range of t should be $0,1,\\cdots,T-1$; Line 234: $f(\\xi;\\theta)$ instead of $f(\\xi|\\theta)$ for consistency; Algorithm 2: the algorithm should have only one output; Table 1: the rows of the two tables should be aligned correspondingly**.\n\nThank you for catching the typos. Thank you for your corrections. We have corrected them in the revised version.\n\n[4] Poupart, P., Vlassis, N., Hoey, J. and Regan, K., 2006. An Analytic Solution to Discrete Bayesian Reinforcement Learning. In Proceedings of the 23rd international conference on Machine learning, pp. 697-704.\n\n[5] Shapiro, A., 2021. Tutorial on Risk Neutral, Distributionally Robust and Risk Averse Multistage Stochastic Programming. European Journal of Operational Research, 288(1), pp.1-13. \n", " - **What is the meaning of $\\Xi_t$. Is it a random variable or a set**.\n\nThank you for your question. We have condensed our notations and got rid of $\\Xi_t$ in the revised version. We meant by $\\Xi_t$ the set of randomness realizations that satisfy the state transition $s_{t+1}=g_{t}\\left(s_{t}, a_{t}, \\xi_{t}\\right)$. Please see the following statement in the revised version: \"The state equation together with the distribution of $\\xi_t$ uniquely determines the transition probability of the MDP, i.e., $\\mathcal{P}(s_{t+1}\\in S'|s_t,a_t)=\\mathbb{P}(\\{\\xi_t \\in \\Xi: g_t(s_t,a_t,\\xi_t)\\in S'\\}|s_t,a_t)$, where $S'$ is a measurable set in $\\mathcal{S}$.\"\n\n- **Line 164, page 4 (Section 2): by ``To illustrate'', I expect the authors to demonstrate the time consistency of the BR-MDP, but they just show that the objective value of BR-MDP serves as an upper bound for the objective of the one considering static risk functional. I would suggest the authors relate this to the concept of time consistency**.\n\nThank you for your valuable suggestion. The upper bound is used to show that the static risk functional always yields a higher total expected cost than the nested risk functional, illustrating the benefit of nested risk functional which originates from time consistency. But since it is confusing and also due to page limit, we have moved this to the Appendix and rewritten this part.\n\n- **Based on formula (2), should it be $\\mathcal{C}_1(s_1)$ instead of $\\mathcal{C}_1(s_1,a_1,\\xi_1)$**.\n\nThank you for pointing it out. Yes, you are right. The argument can be made by including one more stage, such that the last stage is $\\mathcal{C}_2(s_2)$. \n \n- **The definition of $\\Gamma_t$ is confusing. Does it imply that the cardinalities of $\\Gamma_0, \\Gamma_{1}, \\cdots$ are all $|\\mathcal{A}|$**.\n\nThank you for pointing it out. Yes, you are right that the cardinalities of $\\Gamma_0, \\Gamma_{1}, \\cdots$ are all $|\\mathcal{A}|$. We realize the confusion may arise when we show the difficulty of computing the set $\\Gamma_t$. Suppose we are given the set $\\Gamma_{t+1}$ and we want to compute $\\Gamma_{t}$. 
From the alpha-function representation in Proposition 4.1, we need to first determine the minimizer $\\alpha_{t+1}^{\\*(\\xi)}$, which attains the minimum of $\\int_{\\Theta} \\alpha_{t+1}(s_{t+1}, \\theta) \\frac{\\mu_t(\\theta)f(\\xi;\\theta)}{\\int_{\\Theta}\\mu_t(\\theta)f(\\xi;\\theta)d\\theta}d\\theta$. We use superscript $\\xi$ to explicitly show that for each $\\xi$, we need to find a minimizer $\\alpha_{t+1}^{\\*(\\xi)}$ in set $\\Gamma_{t+1}$. Since for each $\\xi$, there are $|\\Gamma_{t+1}|$ candidates for $\\alpha_{t+1}^{\\*(\\xi)}$ (which means one has to search over those candidates to find the minimizer), there are a total of $|\\Gamma_{t+1}|^{|\\Xi|}$ candidates for $\\int_{\\Xi}f(\\xi;\\theta)\\left(C_t(s_{t},a_{t},\\xi)+ \\min_{\\alpha_{t+1}} \\int_{\\Theta} \\alpha_{t+1}(s_{t+1}, \\theta) \\frac{\\mu_t(\\theta)f(\\xi;\\theta)}{\\int_{\\Theta}\\mu_t(\\theta)f(\\xi;\\theta)d\\theta}d\\theta \\right)d\\xi,$ and thus a total of $|\\mathcal{A}||\\Gamma_{t+1}|^{|\\Xi|}$ candidates for $\\Gamma_t$. Please see the revised version for more clear presentation.\n\n- **What is the value of $M_C$**.\n\nThank you for your question. $M_C$ is the maximum customer demand, which is explained right after the notation.\n\n[1] Guigues, V., Shapiro, A. and Cheng, Y., 2021. Risk-averse Stochastic Optimal Control: an Efficiently Computable Statistical Upper Bound. arXiv preprint arXiv:2112.09757.\n\n[2] Chang, H.S., Fu, M.C., Hu, J. and Marcus, S.I., 2005. An Adaptive Sampling Algorithm for Solving Markov Decision Processes. Operations Research, 53(1), pp.126-139.\n\n[3] Hazan, E., 2016. Introduction to Online Convex Optimization. Foundations and Trends in Optimization, 2(3-4), pp.157-325.\n\n", " Thank you very much for the valuable time you have spent reviewing our work. Below are our responses to the comments and questions you have.\n\n- **Please be very specific if the proposed framework only works with one-dimensional uncertainty, i.e., $\\theta$. If yes, please acknowledge this well and provide strong justifications**.\n\nThank you for your suggestion. The proposed formulation and algorithm work for multi-dimensional parameter space. We have added the following descriptions in Section 3 of the revised version to make it clear: $\\theta \\in \\Theta \\subset \\mathbb{R}^d$, where $d$ is the dimension of the parameter $\\theta$. \n\n- **I think several key assumptions for the whole framework should be listed more clearly and frankly. For example, when talking about the alpha-function representation (a major focus of this paper), the authors hide the critical assumption that \"the parameter space is a finite set\" quite \"deep\", which does not seem to be proper to me. For another example, for Theorem 4.3, the authors require joint convexity while not providing any justification (e.g., is it easy to satisfy)**.\n\nThank you for your valuable suggestion. To generalize our proposed algorithm to a continuous parameter space is possible, where one can replace the summation over $\\theta$ by integral. It does not pose a challenge to the technical proof, but it will bring computational difficulty in Bayesian updating of the posterior distribution since Bayesian updating on a general space often does not admit closed form. To overcome this computational difficulty, we can impose a conjugate assumption on the prior distribution and likelihood function such that the posterior distribution has closed form and can be computed easily. 
We have lifted the finite assumption on the parameter space and have added more discussion on the Bayesian updating. As for the joint convexity, we have added the following remarks following Theorem 4.3: ``\n\nThe jointly convex assumption in Theorem 4.3 is common for gradient-based algorithms for solving multi-stage decision making problems (e.g. [1]). It is satisfied in many real-world problems such as inventory control (e.g. [2]) and portfolio optimization (e.g. [3]) etc.''. \n \nWe have been more clear when stating the key assumptions in the revised version.\n\n- **I would encourage the authors to talk more about how they are motivated by ``POMDP'' and discuss the connections in more detail**.\n\nThank you for your valuable suggestion. The connection between BR-MDP and POMDP is motivated by the fact that the posterior distribution in BR-MDP is exactly like the belief state (which is the posterior distribution of the unobserved state given the history of observations) in POMDPs. However, after we did the work, we found [4] in the literature also reformulated a Bayes-adaptive MDP into a POMDP. We have discussed the connections in more details in the revised version.\n\n- **The results are not convincing enough (e.g., only one sample size is considered, and the advantage of the proposed model is not obvious**.\n\n- **The sizes of the historical data are fixed to be 10 in both experiments. I would suggest the authors vary the sample sizes to better examine the performances of the models**.\n\nThank you for your comment. Following your suggestion, we include more results by varying the data size $N=5,10,100$ in the gambler's betting problem, shown in Table 2 in the revised version. In particular, when the data size is very small ($N=5$), our proposed BR-MDP formulation provides more robustness than the nominal approach by having a much smaller standard deviation. It would be tempting to associate the good performance of our proposed formulation to much lower average cost compared to the nominal approach. However, this is an inappropriate interpretation. Note that our proposed BR-MDP framework tries to avoid a scenario where a solution performs well under the estimated model but performs badly under the true model, by possibly giving up some good expected performance and trading for more confidence about the actual performance of a solution. When the data size is very small, the MLE estimator in the nominal approach varies in each replication, which leads to large variance of the obtained policy over different replications. Instead, our BR-MDP formulation mitigates this parameter uncertainty (or model uncertainty, epistemic uncertainty etc.) and produces policy that is stable over different replications. Also note that DR-MDP is the most conservative formulation: for almost every dataset it is given, it produces the policy ``not to bet''. This conservativeness is not desirable if the true winning rate $\\theta^c>0.5$. In summary, the numerical results demonstrate that our BR-MDP formulation seeks a trade-off between the expected performance and the robustness in the actual performance.\n", " - **The paper only focuses on finite-horizon MDPs which makes it limited, as many applications will require infinite MDPs**.\n\nThank you for your comment. We admit that the algorithm proposed in this work only focuses on solving a finite-horizon MDPs. 
We do have some initial theoretical results for infinite-horizon MDPs (not included in this paper), but the development of algorithms will require very different machinery (completely different from the alpha-function representation used in the finite-horizon case) and probably warrants a separate paper. Therefore, we will leave the theoretical analysis and algorithm development for infinite-horizon case to a future work. \n\n- **Some assumptions are vague and require explanations, for example, the authors assume on Page 5 that $\\Theta$ is finite**.\n\nThank you for pointing it out. To generalize our proposed algorithm to a continuous parameter space is possible, where one can replace the summation over $\\theta$ by integral. It does not pose a challenge to the technical proof, but it will bring computational difficulty in Bayesian updating of the posterior distribution since Bayesian updating on a general space often does not admit closed form. To overcome this computational difficulty, we can impose a conjugate assumption on the prior distribution and likelihood function such that the posterior distribution has closed form and can be computed easily. We have lifted the finite assumption on the parameter space and have added more discussion on the Bayesian updating. \n\n- **The authors claim that their algorithm can be extended easily to other coherent risk measures. To support this point, it is better to provide formulations and results with other risk measures**.\n\nThank you for your valuable suggestion. Consider coherent risk measure based on KL divergence, as an example given in [9]. Here the risk functional takes the form $R_{\\epsilon}(Z)=\\inf_{\\gamma, \\lambda>0}\\{\\lambda \\epsilon+\\gamma+\\lambda e^{-\\gamma / \\lambda} \\mathbb{E}e^{Z / \\lambda}-\\lambda\\}$, where $\\epsilon$ is the user-defined ambiguity set size. Given $\\lambda>0$, the minimizer $\\gamma=\\lambda \\ln \\mathbb{E}\\left[e^{Z / \\lambda}\\right]$. We can apply the same technique to approximate the alpha functions for a given vector $\\lambda_0,\\cdots,\\lambda_{T-1}$, and then apply gradient descent on the approximate value function. Due to the page limit and the main focus on the CVaR risk measure, we have included more details in the revised supplementary material.\n\n- **Page 4, line 2 is unclear, what is $\\Xi_t$. It was not defined before**.\n\nThank you for your question. We have condensed our notations and got rid of $\\Xi_t$ in the revised version. We meant by $\\Xi_t$ the set of randomness realizations that satisfy the state transition $s_{t+1}=g_{t}\\left(s_{t}, a_{t}, \\xi_{t}\\right)$. Please see the following statement in the revised version: \"The state equation together with the distribution of $\\xi_t$ uniquely determines the transition probability of the MDP, i.e., $\\mathcal{P}(s_{t+1}\\in S'|s_t,a_t)=\\mathbb{P}(\\{\\xi_t \\in \\Xi: g_t(s_t,a_t,\\xi_t)\\in S'\\}|s_t,a_t)$, where $S'$ is a measurable set in $\\mathcal{S}$.''\n\n- **The pdf and the version in the supplement are different. Any explanation for this**.\n\nThank you for pointing it out. The pdf was part of the supplementary material. In the revised supplementary material, we only include the appendix.\n\n- **Why is the conclusion missing**.\n\nDue to the page limit, we intentionally left out the conclusion section in our current version. But since you think that is important, we have included the conclusion section in the revised version by shortening the introduction and leaving room for the conclusion. 
\n\n- **Possible typos**.\n\nThank you for catching the typos. Thank you for your correction. Yes, the RHS of equation (6) should be $V^*_{t+1}(s_{t+1},\\mu_{t+1})$. It is a typo and we have fixed it in the revised version.\n\n[4] Smallwood, R.D. and Sondik, E.J., 1973. The Optimal Control of Partially Observable Markov Processes over a Finite Horizon. Operations research, 21(5), pp.1071-1088.\n\n[5] Poupart, P., Vlassis, N., Hoey, J. and Regan, K., 2006. An Analytic Solution to Discrete Bayesian Reinforcement Learning. In Proceedings of the 23rd international conference on Machine learning, pp. 697-704.\n\n[6] Zhou, E., 2012. Optimal Stopping under Partial Observation: Near-value Iteration. IEEE Transactions on Automatic Control, 58(2), pp.500-506. \n\n[7] Rigter, M., Lacerda, B. and Hawes, N., 2021. Risk-averse Bayes-adaptive Reinforcement Learning. Advances in Neural Information Processing Systems, 34, pp.1142-1154.\n\n[8] Shapiro, A., 2021. Tutorial on Risk Neutral, Distributionally Robust and Risk Averse Multistage Stochastic Programming. European Journal of Operational Research, 288(1), pp.1-13. \n\n[9] Guigues, V., Shapiro, A. and Cheng, Y., 2021. Risk-averse Stochastic Optimal Control: an Efficiently Computable Statistical Upper Bound. arXiv preprint arXiv:2112.09757.\n", " - **The alpha-function representation is a direct result of equation (5) and some techniques in the Bayesian risk optimization literature**.\n\nThank you for your comment. With all due respect, we disagree with the reviewer's comment. First, the alpha-function representation has nothing to do with Bayesian risk optimization (BRO is only used in the problem formulation of BR-MDP and has nothing to do with the solution methods to BR-MDP). Second, we showed the exact alpha-function representation (in Sec. 3.1) and an approximate representation (in Sec. 3.2), both of which are not direct result or trivial extension of previous results. The exact alpha-function representation differs from the POMDP literature which only consider alpha-function representation for the risk neutral case ([4], [5]) and do not generalize to the risk-averse setting. More specifically, the risk-neutral alpha-function representation make use of the piecewise linearity of the optimal value function, which does not hold in the risk-averse setting. Moreover, the exact alpha-function, even in the risk-neutral case, suffers from the ``curse of time'' in the sense that the number of alpha functions grows exponentially over time. In the risk-averse setting, this difficulty is even more severe because the CVaR risk functional adds additional complexity to the alpha function approximation due to the optimization over $u_t$. To overcome this difficulty, we further develop an approximate alpha-function representation that keeps a constant small number of alpha-functions over time for a fixed $u$, combined with a gradient descent algorithm that optimizes over $u$ in the outer loop. \n\n- **The Bellman equation in (6) is just a standard Bellman equation with a continuous state space, for which several efficient (approximation) algorithms exist,but I do not see an explicit comparison of the proposed algorithm against existing ones**.\n\nThank you for your comment. The Bellman equation in equation (6) differs from the standard Bellman equation of MDPs in two aspects. First, we introduce the posterior distribution as an additional continuous state, while there is no such state in the standard Bellman equation of MDPs. 
While the posterior distribution can theoretically be regarded as just another continuous state, it is often infinite dimensional (for general distributions) or lives in a multi-dimensional simplex (for discrete distributions), and hence it creates unique difficulties and opportunities for computation (e.g., the alpha-function representation we explored is unique to this type of continuous-state MDP but does not hold for general continuous-state MDPs). Second, we impose the CVaR risk functional to quantify the uncertainty brought by the unknown parameter; CVaR introduces an additional variable $u$ and adds another layer of optimization to the Bellman equation, while there is no quantification of parameter uncertainty in standard Bellman equations for MDPs.\n\nAs explained above, our Bellman equation is different from the usual Bellman equation, and hence the standard methods for solving continuous-state MDPs do not generalize easily to our problem. With that said, we did find some efficient approximation algorithms for such (continuous-state, risk-averse with respect to parameter uncertainty) Bellman equations, and probably the closest work to ours in the literature is [7]. In that work, the authors propose to apply a CVaR risk functional to the **total cost** and simultaneously address both epistemic and aleatoric uncertainty. However, their formulation, with a static risk measure, leads to time-inconsistent behavior, where the optimal policy at the current time stage can become suboptimal in the next time stage simply because a new piece of information is revealed (see [8]). This may not be problematic, as they consider an **online** RL setting, where after making a decision, the agent can interact with the true environment, receive the corresponding reward, and observe the state transition. In contrast, in this work we consider an **offline** setting, where there is no interaction with the true environment when we make the decision. Directly comparing with their approach in our **offline** setting may not be a fair comparison. On the other hand, since we motivate the problem formulation from the conservative distributionally robust MDP (DR-MDP) formulation, we did include a comparison with DR-MDP in our numerical results. ", " Thank you very much for the valuable time you have spent reviewing our work. Below are our responses to the comments and questions you have.\n\n- **Robust MDP and Bayesian risk optimization formulations are widely studied and well understood. Thus, the results developed in the paper are quite trivial, given all the techniques we have in the literature**.\n\nThank you for your comment. With all due respect, we argue that Bayesian risk optimization has not been widely studied and has only been studied in the limited setting of static (i.e., one-stage) optimization. In fact, Bayesian risk optimization, abbreviated as BRO, was proposed very recently and studied in only a handful of papers ([1], [2], [3]). BRO is a novel framework to replace the widely-used distributionally robust optimization (DRO) framework in static optimization, and has never been explored for multi-stage and dynamic settings. To the best of our knowledge, we are the first to extend the single-stage BRO formulation to the multistage setting, especially in the context of Markov decision processes, optimal control, or reinforcement learning. While robust MDP is widely studied, we deviate from the mainstream literature of robust MDP in terms of formulation and the consequent solution method.
The starting point of our paper is discussing the limitations of the mainstream robust MDP formulations (such as conservativeness and lack of time consistency; see the second paragraph of the introduction), and therefore, we propose a new formulation. This new formulation, which has a nested structure and uses a Bayesian approach, requires solution approaches significantly different from those for mainstream robust MDPs. Roughly speaking, mainstream robust MDPs require solving a minimax problem in the Bellman equation, while our formulation only needs to solve a minimization problem but introduces an additional belief state that needs to be updated in a Bayesian manner over time stages. Therefore, the results in this paper, from formulation to solution methods, are not trivial at all. \n\n- **For example, Section 2 (including Algorithm 1) only presents already known or trivial material**.\n\nThank you for your comment. Section 2 (now Section 3 in the revised version) presents some already known material to pave the way for introducing our new formulation, while the rest of Section 2 presents new and non-trivial material. Specifically, Section 2 (Preliminaries and Problem Formulation) presents known results (preliminaries) in Sec. 2.1 to introduce necessary background for this paper and in part of Sec. 2.2 to introduce the definition of an MDP and its parameter uncertainty. We then propose a new formulation in the second half of Sec. 2.2 and show its Bellman optimality and time consistency in Sec. 2.3. Please note that this formulation and result are not trivial, because the time consistency (and hence the existence of the Bellman optimality) is exactly where our new formulation differs from the mainstream robust MDPs. Algorithm 1 is a natural consequence of the Bellman equation, which is listed there as the "idealized" benchmark (as opposed to the "practical" solution developed in the following section). We have tried to clarify this in the revised version. \n\n[1] Zhou, E. and Xie, W., 2015. Simulation Optimization When Facing Input Uncertainty. In 2015 Winter Simulation Conference, pp. 3714-3724.\n\n[2] Wu, D., Zhu, H. and Zhou, E., 2018. A Bayesian Risk Approach to Data-driven Stochastic Optimization: Formulations and Asymptotics. SIAM Journal on Optimization, 28(2), pp.1588-1612.\n\n[3] Cakmak, S., Wu, D. and Zhou, E., 2021. Solving Bayesian Risk Optimization via Nested Stochastic Gradient Estimation. IISE Transactions, 53(10), pp.1081-1093.\n", " - **Does the proposed method scale when the parameter space is high-dimensional**.\n\nThank you for your question. First, the proposed formulation and algorithm work for a multi-dimensional parameter space. We will add the following descriptions in Section 3 of the revised version to make it clear: $\\theta \\in \\Theta \\subset \\mathbb{R}^d$, where $d$ is the dimension of the parameter $\\theta$. Second, the proposed algorithm can scale easily to a high-dimensional parameter space. A high-dimensional parameter space leads to high-dimensional integration in both Bayesian updating and dynamic programming. For high-dimensional integration, one can resort to Monte Carlo integration, which enjoys a convergence rate of $1/\\sqrt{\\text{sample size}}$ and is independent of the dimension. \n\n- **Possible typos**.\n\nThank you for catching the typos. Yes, the RHS of equation (6) should be $V^*_{t+1}(s_{t+1},\\mu_{t+1})$; $f(\\xi;\\theta)$ and $f(\\xi|\\theta)$ mean the same thing.
We have corrected them in the revised version.\n\n- **The authors did not address any limitations and potential negative societal impact of their work**.\n\nThank you for your comment. We report no negative societal impacts, including but not limited to potential malicious or unintended uses, environmental impact, fairness considerations, privacy considerations, or security considerations. Combined with your previous suggestion of adding the conclusion section, we have discussed the limitations of the work and future directions in the conclusion section in our revised version.", " Thank you very much for the valuable time you have spent reviewing our work. Below are our responses to the comments and questions you have.\n- **Condense introduction part (the current version may be well suited for a journal paper; but for a conference paper, the space of the introduction is a bit too much)**.\n- **The readability of the paper suffers from inconsistency and heavy notations**.\n- **Check the notations and remove redundant information. e.g. equation 4 appears in multiple places (line 177, line 213 (integral is replaced by sum))**.\n\nThank you for your comments and suggestions. The majority of the introduction focuses on motivating our problem formulation and has a long explanation of why we consider Bayesian risk optimization with a time-consistent multistage formulation. We have moved parts of the comparison with the existing literature on robust optimization to after the introduction to keep it short, while maintaining a motivating and clear introduction.\n \nThe heavy notation is due to the complexity of the considered problem (risk-averse setting, CVaR, alpha-function approximation, etc.). In our revised version we have tried to condense and streamline the notation as much as possible, and have also provided a table summarizing the main notation in the appendix for future reference. \n \n- **The paper seems unfinished since it lacks a conclusion section**.\n\nThank you for your comment. Due to the page limit, we intentionally left out the conclusion section in our current version. But since you think it is important, we have included the conclusion section in the revised version by shortening the introduction and thus leaving room for the conclusion. \n \n- **The effectiveness of the paper is only evaluated on simple synthetic experiments**.\n\nThank you for your comment. Applying the proposed algorithms to more complicated domains with real-world data is left as future work. The focus of this work is on bringing out the novel problem formulation and designing a new alpha-function approximation approach to solve it efficiently. The two numerical examples we use come from [1] and [2], which were respectively published in NeurIPS 2021 and in a leading journal, *Operations Research*. These examples allow a better understanding of our results by comparing to the true optimal solutions (which can be obtained thanks to the simple structure of these problems), to the empirical approach (from the frequentist perspective), and to the distributionally robust approach in the offline setting. From these comparisons, our method shows less conservativeness and demonstrates the benefit of considering future data realization in our proposed formulation.\n \n- **Section 4.2, The first paragraph is hard to follow, the rewriting of equation (6) is not clear**.\n\nThank you for your comment. Equation (6) defines the optimal value function with the augmented state $(s,\\mu)$.
It essentially solves a minimization problem, where the minimization is taken with respect to action $a$ and variable $u$ used in the CVaR expression. Then we introduce the optimal $Q$ function, which is a function of augmented state $(s,\\mu)$ and also $a$ and $u$. The optimal value function is equivalent to the minimum of the optimal $Q$ function. This completely parallels the optimal value function and $Q$ function in the traditional MDP and RL literature. We have made it more clear in the revised version.\n \n- **Line 233, it says $V_t(s_t,\\mu_t)$ stands for the ``optimal'' value function, what does $V_t^{*}(s_t,\\mu_t)$ (equation (6)) stand for**.\n\nThank you for raising this question. Note that the true optimal value function $V^{*}_t(s_t,\\mu_t)$ defined in equation (6) is solved by minimizing over $a_t$ and $u_t$. The ``optimal'' value function $V_t(s_t,\\mu_t)$ is solved by minimizing over $a_t$ but with $u_t$ fixed. The idea is that we first conduct alpha function approximation with a fixed $u$ vector. In that case we can control the number of alpha functions. We have rewritten to make it clear in the revised version.\n\n[1] Rigter, M., Lacerda, B. and Hawes, N., 2021. Risk-averse Bayes-adaptive Reinforcement Learning. Advances in Neural Information Processing Systems, 34, pp.1142-1154.\n\n[2] Chang, H.S., Fu, M.C., Hu, J. and Marcus, S.I., 2005. An Adaptive Sampling Algorithm for Solving Markov Decision Processes. Operations Research, 53(1), pp.126-139.", " This paper provides a Bayesian risk Markov Decision Process (BR-MDP) formulation to address parameter uncertainty in MDP. \n- The paper uses a Bayesian posterior distribution as opposed to ambiguity set, and imposes a risk functional on the objective function with respect to the posterior distribution. \n- Specifically, the authors assume the distribution of the randomness in the system belongs to some parametric family, and model the uncertainty over the unknown parameters via Bayesian posterior distributions. \n- Furthermore, a CVaR risk functional taken with respect to the posterior distribution on the expected total cost is proposed in a nested form to promote time-consistency solutions.\n- Finally, the authors derived a dynamic programming solution with an augmented state that incorporates the posterior information and proposes an analytical approximate solution to BR-MDP. ## Strength\n- The idea of using Bayesian posterior to model the uncertainty in the parameters of MDP is intuitive and will improve the overly conservative problem in distributionally robust-MDP method. \n- Compressive theoretical analysis of the approximation are provided. \n- The proposed approximation method reduce the computation time significantly while not influencing the performance. \n\n\n## Weakness:\n- The readability of the paper suffers from inconsistency and heavy notations. \n- The paper seems unfinished since it lacks a conclusion section. \n- The effectiveness of the paper is only evaluated on simple synthetic experiments. - Section 3.2, The first paragraph is hard to follow, the rewriting of equation 6 is not clear. \n- Line 233, it says $V_t(s_t, \\mu_t)$ stands for the \"optimal\" value function, what does $V^*_t(s_t, \\mu_t)$ (equation 6) stand for? \n- Does the proposed method scale when the parameter space is high-dimensional? 
\n\n\nPossible typos: \n- Equation 6: rhs, $V_{t+1}^*(s_{t+1}, \\mu_t)$ should be $V_{t+1}^*(s_{t+1}, \\mu_{t+1})$\n- Line 234-235, both $f(\\epsilon; \\theta)$ and $f(\\epsilon|\\theta)$ appeared in this equation. Do they mean the same thing? The authors did not addressed any limitations and potential negative societal impact of their work. \n\n\n## Suggestions: \n### Several ways to improve the clarity of the paper: \n - Condense introduction part (the current version may well suited for a journal paper; but for conference paper, the space of the introduction is a bit too much) \n - Check the notations and remove redundant information. e.g. equation 4 appears in multiple places (line 177, line 213 (integral is replaced by \\Sum). \n\n\n\n", " The paper presents a Bayesian risk approach for finite-horizon MDPs under uncertainty. By using some techniques from Bayesian risk optimization, the authors develop an approximation algorithm to solve the MDP problem. The algorithm is then supported by numerical experiments based on two finite-horizon offline planning problems. The paper is well-written. Both robust MDP and Bayesian risk optimization are important areas in Machine Learning/Optimization. Thus, the problem considered is interesting and worth investigating. ## Strengths:\nThe formulation is new. The paper is technically sound and well written/organized. \nThe combination of MDP under uncertainty and Bayesian Risk Optimization seems interesting and promising. \n\n## Weaknesses:\n I believe the paper is not ready for publication due to several limitations as stated below:\n\nRobust MDP and Bayesian risk optimization formulations are widely studied and well understood. Thus, the results developed in the paper are quite **trivial**, given all the techniques we have in the literature. For example, Section 2 (including Algorithm1) only presents already known or trivial material. The alpha-function representation is also a direct result of equation (5) and some techniques in the Bayesian risk optimization literature. The Bellman equation in (6) is just a standard Bellman equation with a continuous state space, for which several efficient (approximation) algorithms exist, but I do not see an explicit comparison of the proposed algorithm against existing ones. \n\nThe paper only focuses on finite-horizon MPDs which makes it limited, as many applications will require infinite MDPs\n\nSome assumptions are vague and require explanations, for example, the authors assume on Page 5 that \\Theta is finite. It is true that we can always discretize a continuous space, but it raises several questions: How to discretize, how many discrete samples would be needed? approximation errors as a function of the discretization points...\n\nThe authors claim that their algorithm can be extended easily to other coherent risk measures. To support this point, it is better to provide formulations and results with other risk measures. - Page 4, line 2 is unclear, what is $\\Xi_t$. It was not defined before\n- The pdf and the version in the supplement are different. Any explanation for this? \n- Why is the conclusion missing?\n- In (6), both $V_{1}$ and $V_{t+1}$ on the both sides depend on $\\mu_t$. Should it be $\\mu_{t+1}$ on the right hand side? There would be no potential negative societal impact of this work.", " This paper proposes a Bayesian risk MDP called BR-MDP to account for both the uncertainties in transition probabilities and costs/rewards. 
The model takes a nested form of the risk functional, which endows its optimal policy with time consistency. The computational difficulties in the expectation and the risk functional, as well as the posterior with a possibly infinite dimension, are two main challenges in obtaining the solution of the BR-MDP. To overcome the latter, the authors focus on the conjugate families of distributions, while for the former, the authors consider the conditional value-at-risk and derive an $\\alpha$-function approximation of the optimal value function and propose an algorithm to solve for the approximate value function efficiently. Two empirical studies are carried out to demonstrate the performance of the proposed model and the efficiency of the proposed algorithm. Strengths: \nOverall, the paper is well-written. I especially appreciate the design of the approximate dynamic programming to overcome computational inefficiency. \n\nWeaknesses:\n1. Please be very specific if the proposed framework only works with one-dimensional uncertainty, i.e., $\\theta$. If yes, please acknowledge this well and provide strong justifications. \n2. I think several key assumptions for the whole framework should be listed more clearly and frankly. For example, when talking about the $\\alpha$-function representation (a major focus of this paper), the authors hide the critical assumption that \"the parameter space is a finite set\" quite \"deep\", which does not seem to be proper to me. For another example, for Theorem 3.2, the authors require joint convexity while not providing any justification (e.g., is it easy to satisfy). \n3. I would encourage the authors to talk more about how they are motivated by \"POMDP\" and discuss the connections in more detail.\n4. In the experiments, the results are not convincing enough (\\textit{e.g.}, only one sample size is considered, and the advantage of the proposed model is not obvious, \nBelow are some other comments for the paper.\n\n$\\bullet$ Line $138$, page $4$ (Section $2$): I wonder what is the meaning of $\\Xi_t$. Is it a random variable or a set?\n\n$\\bullet$ Line $164$, page $4$ (Section $2$): by \"To illustrate\", I expect the authors to demonstrate the time consistency of the BR-MDP, but they just show that the objective value of BR-MDP serves as an upper bound for the objective of the one considering static risk functional. I would suggest the authors relate this to the concept of time consistency.\n\n$\\bullet$ Line $166$, page $4$ (Section $2$): based on formula $(2)$, should it be $\\mathcal{C}_1(s_1)$ rather than $\\mathcal{C}_1(s_1,a_1,\\xi_1)$?\n\n$\\bullet$ Line $212$, page $6$ (Section $3$): the definition $\\Gamma_t=\\{\\alpha_t\\}_{a_t\\in\\mathcal{A}}$ is confusing. Does it imply that the cardinalities of $\\Gamma_0, \\Gamma_1,...$ are all $\\vert\\mathcal{A}\\vert$?\n\n$\\bullet$ Line $294$, page $8$ (Section $4$): what is the value of $M_C$?\n\n$\\bullet$ Lines $312$ to $313$, page $8$ (Section $4$): it seems to me that the low variation of performances of a model is always desirable.\n\n$\\bullet$ Lines $334$ to $337$, page $9$ (Section $4$): the advantage of BR-MDP over DR-MDP in Table $1$ is too minor (59.64(1.42) versus 60.00(0.00)). 
Hence this may not be an ``evident illustration\".\n\n$\\bullet$ Line $342$, page $9$ (Section $4$): to better illustrate the claim that \"the difference is small\", I would suggest the authors provide a table to report the relative gaps (in \\%) between $\\tilde{V}^*_0(s_0,\\mu_0)$ and $V^*_0(s_0,\\mu_0)$.\n\n$\\bullet$ Could the authors briefly clarify why the exact solution of CVaR BR-MDP is computationally available in the experiments (Since they state in the first paragraph of Section $3$ that this exact solution is hard to obtain)?\n\n$\\bullet$ The sizes of the historical data are fixed to be $10$ in both experiments. I would suggest the authors vary the sample sizes to better examine the performances of the models.\n\n$\\bullet$ Could the authors explain why the computation times of \"Nominal\" and \"DR-MDP\" are not reported in Table $1$?\n\n$\\bullet$ I would suggest the authors explain why the performances of DR-MDP are not reported in Figure $1{\\rm a}$.\n\n$\\bullet$ Line $121$, page $3$ (Section $2$): \"$Z$\" should be \"$X$\"?\n\n$\\bullet$ Lines $169$ to $170$, page $4$ (Section $2$): the right-hand side of the inequality, should it be $\\mathbb{E}_{\\xi_1}$ rather than $\\mathbb{E}_{\\xi_1\\vert\\xi_0}$ (based on the independence of $\\xi_t$'s described in Lines $135$ to $136$)?\n\n$\\bullet$ Lines $205$ to $206$, page $5$ (Section $2$): should it be \"...$V_{t+1}^*(s_{t+1},\\mu_{t+1})...$\"?\n\n$\\bullet$ Lines $211$ to $213$, page $6$ (Section $3$): the statement should be: \"$V_t^*=...$, where $\\alpha_t(s_t,\\theta)=...$\". Also, what is the range of $t$ for the equations to hold?\n\n$\\bullet$ Lines $234$ to $235$, page $6$ (Section $3$): $f(\\xi\\vert\\theta)$ should be $f(\\xi;\\theta)$ for consistency.\n\n$\\bullet$ Algorithm $2$, page $7$: the algorithm should have only one \"\\textbf{output}\".\n\n$\\bullet$ Table $1$, page $9$ (Section $4$): the rows of the two tables should be aligned correspondingly.\n\n$\\bullet$ Figure $1$, page $9$ (Section $4$): it might be better to divide Figure $1$ into two figures.\n\n Yes", " This paper proposes the Bayesian Risk MDP (BR-MDP) framework to deal with the epistemic uncertainty in sequential decision problems. In the introduced framework, a coherent risk functional is applied to the expectation of the total cost under the posterior distribution of the model parameters. Differently from prior works, the risk functional is applied in a nested form, which preserves the time consistency of the objective and allows to write a dynamic programming equation of the optimal value function. However, solving the dynamic programming problem analytically is far-fetched. The paper thus introduces an approximate algorithm, partially inspired by POMDP methods, for a specific instance of the BR-MDP with the CVaR risk functional. Finally, the given methodology is empirically evaluated in toy domains. 
Strengths\n- (Relevant Problem) This paper tackles the relevant problem of robust decision-making under uncertainty on the model parameters.\n- (Framework) I am not particularly familiar with the related literature, but to my understanding this is the first Bayes-adaptive framework that incorporates a time consistent risk measure over the epistemic uncertainty.\n- (Methodology) Since solving the BR-MDP problem is intractable in general, the paper proposes an interesting value function approximation method that is partially inspired by POMDP literature.\n\nWeaknesses\n- (Weak Empirical Analysis) The empirical analysis does not fully motivate the framework and the methodology, as the performance of the proposed approach does not seem to be significantly superior than the (not necessarily strong) baselines.\n- (Notation and Clarity) The notation is quite convoluted and not always sharp. This undermines the clarity of some portions of the work.\n\nThe premise of incorporating the sensitivity to the epistemic risk in a Bayes-adaptive framework is interesting, and also a natural continuation of the robust decision-making stream of works. The technical contribution does not seem to be ground-breaking, as most of the ideas can be traced back to previous works in cost-variability sensitivity and POMDPs, but this would be totally fine given the other contributions, i.e., the methodology and the framework itself. My main concern regards the empirical validation: It is ok for an essentially methodological/theoretical paper to have an empirical analysis in toy problems, but it should showcase the benefit brought by the method. Instead, the performance seems to be close to the baselines, especially to the naïve maximum likelihood estimator.\n\nFor the aforementioned reasons, I am still unsure on the significance of the work, and I am providing a slightly negative evaluation. However, I am open to increase my score if the authors can address my concerns on the motivation and potential of the proposed approach in their rebuttal.\n\n---\nAFTER DISCUSSION\n\nI have increased my score to weak accept to acknowledge the value of this work that emerged from discussion with the authors and other reviewers. However, I believe this paper is still missing either a stronger theoretical result on the benefit of the nested form or a stronger empirical evaluation in more challenging domains to be outstanding in any way.\n 1) The work by Chow et al., 2015 [22] seems to be closely related to this submission. Although they tackle the cost-variability risk instead of the epistemic risk, they also consider a CVaR risk measure in a nested form to derive a Bellman equation (Sec. 3) that allows for approximate dynamic programming. Can the authors discuss the relation with this prior work, and especially the technical differences with respect to this submission?\n\n2) Chow et al., 2015 [22] make an interesting connection between cost-variability risk and epistemic risk. They prove that a solution that is sensitive to the cost-variability provides also robustness to the epistemic uncertainty as a by-product. Do the authors think that a similar case could be made in their setting, i.e., that a solution to the BR-MDP problem could provide some robustness to the cost-variability risk as well?\n\n3) The proposed BR-MDP framework is similar in flavor to a Bayes-adaptive approach with a time consistent risk functional over the model parameters. Can the authors discuss the impact of the time consistency? 
Can they explain why an approximate solution to the BR-MDP problem is necessarily better than a (possibly exact) solution to the BAMDP problem with a risk functional applied on the full trajectory rather than in a nested form?\n\n4) Is the proposed approximate methodology really practical? Can the authors comment on how the Eq. 8 could be computed/estimated in more challenging domains?\n\n5) Can the authors better explain why the Eq. 7 is significantly easier to compute than the exact Eq. 6? I guess the main benefit is pulling the min out of the expectation, but I found this paragraph quite hard to process (especially lines 218-222 could be revised).\n\n6) Theorem 3.4 characterizes the approximate value function as an upper bound of the exact value. Do the authors believe there is hope to assess guarantees on the gap between the two? I think that a discussion on the negative societal impact can be avoided in this paper. However, the authors answered 'Yes' to the checklist question on the limitations of their work, but they did not provide motivation or context for their answer. Can the authors list the main limitations of their approach?" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "_0jWgzld7S", "37iEV8Sxb2l", "TFGjEgW-av", "gYu3w85eC6U", "nips_2022_PO6cKxILdi", "TlNjywKErww3", "KcyoQ3qQQ1t", "6tKNg66mjLJ", "6tKNg66mjLJ", "6tKNg66mjLJ", "dust7sxSeA3", "dust7sxSeA3", "dust7sxSeA3", "u53tsiI4tQY", "u53tsiI4tQY", "u53tsiI4tQY", "EivJXf0zS42", "EivJXf0zS42", "nips_2022_PO6cKxILdi", "nips_2022_PO6cKxILdi", "nips_2022_PO6cKxILdi", "nips_2022_PO6cKxILdi" ]
nips_2022_16nVkS8Twxo
Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
Variance reduction techniques such as SPIDER/SARAH/STORM have been extensively studied to improve the convergence rates of stochastic non-convex optimization, which usually maintain and update a sequence of estimators for a single function across iterations. What if we need to track multiple functional mappings across iterations but only with access to stochastic samples of $\mathcal{O}(1)$ functional mappings at each iteration? There is an important application in solving an emerging family of coupled compositional optimization problems in the form of $\sum_{i=1}^m f_i(g_i(\mathbf{w}))$, where $g_i$ is accessible through a stochastic oracle. The key issue is to track and estimate a sequence of $\mathbf g(\mathbf{w})=(g_1(\mathbf{w}), \ldots, g_m(\mathbf{w}))$ across iterations, where $\mathbf g(\mathbf{w})$ has $m$ blocks and it is only allowed to probe $\mathcal{O}(1)$ blocks to attain their stochastic values and Jacobians. To improve the complexity for solving these problems, we propose a novel stochastic method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to track the sequence of $\mathbf g(\mathbf{w})$. It is inspired by STORM but introduces a customized error correction term to alleviate the noise not only in stochastic samples for the selected blocks but also in those blocks that are not sampled. With the help of the MSVR estimator, we develop several algorithms for solving the aforementioned compositional problems with improved complexities across a spectrum of settings with non-convex/convex/strongly convex/Polyak-{\L}ojasiewicz (PL) objectives. Our results improve upon prior ones in several aspects, including the order of sample complexities and dependence on the strong convexity parameter. Empirical studies on multi-task deep AUC maximization demonstrate the better performance of using the new estimator.
Accept
The paper makes a nice contribution to the growing field of stochastic compositional optimization. In particular, it considers the case of coupled compositional problems and provides an algorithm that tracks all the inner-level objective information required in an efficient manner. Sample complexities (which are intuitively, optimal) are established. The authors **must** emphasize that they work under the stronger Assumption 3 in the revision.
train
[ "cP4gjWLE1fo", "Aklc2meAJW", "0mbPb2XjRaO", "xQsOB5sn_T", "LbQ6Ba5iAm8", "8IrmRA4hzWs", "cG-6bcvTPXj", "Msems-vxL5x", "_c-0v9zaNwY", "6QICfl0PKE9" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Authors, \n\nThanks for your detailed comments. It makes more sense now. ", " Dear authors,\n\nThank you for the response! It clearly addressed all my concerns.", " Thank you very much for your constructive comments and suggestions! We will revise accordingly.\n\n---\n\nQ1: It would be much more clear if the authors can elaborate more on their contributions and distinguish them from other existing works on variance reduction and block coordinate updates.\n\nA1: Your vision in terms of block coordinate updates is relevant here. Indeed, Wang and Yang [2022] have explained their tracking of $g=(g_1, \\ldots, g_m)$ as stochastic block coordinate updates. In particular, their SOX algorithm views their moving average update, i.e., \n$$\n\\mathbf u_t^i=(1-\\beta) \\mathbf u_{t-1}^i + \\beta g_i(\\mathbf w_t; \\xi^i_t), i\\in\\mathcal B_1^t\n$$\n\nas stochastic block coordinate update for the (dynamic) objective $ g_t(\\mathbf u)=\\sum_{i=1}^m\\\\|\\mathbf{u}^i - g_i(\\mathbf w_t)\\\\|^2/2$. From this perspective, our estimator MSVR can be viewed as applying a momentum-based stochastic block coordinate update for the same objective, with the update \n\n$$\n q^i_t = \\nabla_i g_t(\\mathbf u_{t-1};\\xi^i_t) + \\theta_t (\\nabla_i g_t(\\mathbf u_{t-1};\\xi^i_t) - \\nabla_i g_{t-1}(\\mathbf u_{t-1};\\xi^i_{t})), \\quad \\mathbf u^i_t = \\mathbf u^i_{t-1} - \\beta_t q^i_t\n$$\n\nwhere $\\nabla_i g_t(\\mathbf u_{t-1};\\xi^i_t)=\\mathbf u^i_{t-1} - g_i(\\mathbf w_t; \\xi_t^i)$ and $\\theta_t = \\gamma_t/\\beta_t$. The second term in $q^i_t$ is a momentum term, which is an additional term compared with that of SOX update for $\\mathbf u^i_t$. \n\nHowever, to the best of our knowledge, there is no prior work analyzing the above momentum-based stochastic block coordinate update. Indeed, our goal is not to optimize $ g_t(\\mathbf u)$. Instead we aim to bound $\\sum_{t=1}^T\\\\|\\mathbf u_t^i - g_i(\\mathbf w_t)\\\\|^2$ for a sequence of $\\mathbf w_{1}, \\ldots, \\mathbf w_T$. Hence, existing methods and analysis on variance reduction and block coordinate updates that focus on optimizing a given fixed objective cannot be applied here. In another word, our analysis and its synthesis with the update for FCCO is novel.\n\n---\n\nQ2: The paper should include more experimental results to demonstrate the dependency of $\\epsilon$, $n$, $B_1$, $B_2$ instead of simply presenting results when fixing $B_1$, $B_2$.\n\nA2: We have added more ablation studies by fixing $B_1$ and varying $B_2$, and fixing $B_2$ and varying $B_1$. The results are included in the revision (see Section C in the supplement), which are consistent with our theory, i.e., the larger $B_1$ ($B_2$) the faster the convergence (in terms of iteration complexities). We also plan to report more experimental results on large-scale data set. \n\n---\n\nQ3: Line 182: How to ensure that the linearized update in Eq. (6) can obtain an $\\mathbf{u}_t^i$ within the range of ${g}_i$? For example, if ${g}_i$ is nonnegative given its structure, the linearized update may obtain a negative $\\mathbf{u}_t^i$. Would this be a significant issue?\n\nA3: In this paper, we do not restrict the input domain of $f$ or range of $g_i$ for simplicity. If there is a constraint on the range of $g_i$ or input domain of $f$, we can add a projection to project the linearized update into the range of $g_i$, which does not affect our analysis of Lemma 2. Thank you for pointing this out. We have clarified this point in the revision (see remark under Lemma 2). 
\n\n---\n\nQ4: Other few minor comments for this paper.\n\nA4: Thank you for your constructive suggestions. We have revised them accordingly.", " Thank you very much for your constructive comments and suggestions!\n\nWe have added more ablation studies in the revision including using different networks and varying $B_1$ and $B_2$ (see Section C in the supplement) and plan to report more experimental results on large-scale data set. We focus on comparing with SOX since it is state of the art for solving the FCCO problem and it has been compared with other baselines (e.g., BSGD) with better performance observed. Below, we address your questions.\n\n---\n\nQ1: In line 55, the authors claim that “However, this simple change does not improve the complexities over that obtained by Wang and Yang [2022]”. Do you mean that with Eq.4 the complexity is the same complexities as Wang and Yang [2022]?\n\nA1: Yes, we mean by simply using Eq.4, the complexity is on the same order as Wang and Yang [2022], e.g., $\\mathcal{O}(m \\epsilon^{-4})$ for general smooth case, which is worse than the proposed methods.\n\n---\n\nQ2: I am confused with the last column of Table 1. Does it denote the ratio between $B_1$ and $B_2$. For BSGD and BSpiderBoost, there are two Big O notations, but for other methods, there is only one Big O notation. What is the exact meaning?\n\nA2: Sorry for using this misleading notation. $B_1$/$B_2$ means $B_1$ and $B_2$. When $B_1$ and $B_2$ are on the same order, e.g. $\\mathcal{O}(1)$ for SOX and our method, we only give one Big O notation. We have made them clear in the revised version by using $B_1, B_2$ and stating the order of $B_1$ and $B_2$ explicitly for all methods. \n\n---\n\nQ3: After presenting the main theorems, the authors should discuss how to set the values of $B_1$ and $B_2$.\n\nA3: Thank you! We have added more discussions in the remark of Theorem 1 in the revision (highlighted in the revision). ", " Thank you very much for your constructive comments!\n\n---\n\nQ1: The proofs in the supplementary are not very clear-written and more explanations about each lemma are recommended.\n\nA1: Thank you for your suggestion. We will give more explanations for each lemma in the supplement. \n\n---\n\nQ2: Could the analysis also be used in contrastive learning, whose loss function can also be written in the form of FCCO?\n\nA2: Yes, it is possible. One could follow the recent work [Yuan et al., 2022] to extend our analysis to self-supervised contrastive learning.\n\nReference: Yuan et al. Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance. ICML, 2022. \n\n---\n\nQ3: For the inequality in Lemma 1 on page 5 line 174, should it be $\\beta_{t+1}$ rather than $\\beta_t$\n\nA3: Yes, thank you for catching this typo. We have changed the lemma to make it correct (using $\\\\| \\mathbf u_{t}-g(\\mathbf w_t) \\\\|^2$ in the left-hand side) in the revised version.", " Thank you for the review! \nWe would like to clarify that although the MSVR estimator is incremental to STORM on the surface, our analysis is novel and the improvements for the FCCO problem are significant compared with state of the art results. Since we focus more on theoretical analysis covering non-convex, convex, strongly convex and PL objectives, our experiments have been focused on one application with six benchmark datasets. We have added more ablation studies in the revision (See Section C in the supplement). 
We will consider more applications in the long version of the paper. Below, we address your questions.\n\n---\n\nQ1: When considering the $\\mu$-PL condition, do we need the convexity condition?\n\nA1: No, we do not need the convexity condition when considering the $\\mu$-PL condition.\n\n---\n\nQ2: The authors introduce the concept of PL condition on Page 7, but use strong convexity in other places.\n\nA2: Since the PL condition is weaker than strong convexity, so the results for the PL condition are directly applicable to strong convexity (Note that if a function is $\\mu$-strong convex, it would also satisfy the $\\mu$-PL condition). We have mentioned the results under the PL condition in the abstract and introduction part in the revision.\n\n---\n\nQ3: It seems that we should use a small value of $B_2$. Is there any benefit of using a large $B_2$?\n\nA3: It is true that smaller $B_2$ is better for sample complexities. However, There is **benefit** of using large $B_2$ in terms of iteration complexity. The larger $B_2$, the smaller the iteration complexity. Please check Theorem 1 and Theorem 2 for the iteration complexities. Hence, from the computational perspective, if $B_2$ samples can be processed in parallel (e.g., in GPU), there is a benefit of using large $B_2$. In our experiments, we use $B_2=128$. ", " This paper introduces a novel stochastic method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to track multiple functional mappings across iterations. Based on MSVR, the authors develop several algorithms for solving the finite-sum coupled compositional optimization (FCCO) problem. Theoretical analysis shows that the proposed algorithms obtain optimal complexities across a spectrum of settings. Experiment results on multi-task deep AUC maximization to demonstrate the advantage of the proposed algorithms. Strengths:\n\n1. The FCCO problem has a broad spectrum of applications in machine learning and may have a high impact in this area. The proposed algorithms have a major improvement in complexities. \n\n2. The proposed MSVR estimator is novel and quite efficient. It successfully reduces the tracking error with only a constant number of stochastic accesses to functional mappings at each iteration. \n\n3. The sample complexities are much better than existing works, and optimal in many cases. The experimental results are also convincing.\n\nWeaknesses:\n1. Besides multi-task deep AUC maximization, more experiments on others problems could be added to evaluate the proposed algorithms.\n\n2. The MSVR estimator seems incremental with respect to STORM, although the parameter is set in a different way.\n 1. In Theorem 3, the authors obtain better sample complexities under the $\\mu$-PL condition. When considering the $\\mu$-PL condition, do we need the convexity condition?\n\n2. There seems an inconsistency in the writing. The authors introduce the concept of PL condition on Page 7, but use strong convexity in other places.\n\n2. The sample complexities in Table 1 increase with $B_2$. It seems that we should use a small value of $B_2$. Is there any benefit of using a large $B_2$? \n N/A", " The paper considers a scheme in coupled compositional optimization where oracles of inner functions are only available at $\\mathcal{O}(1)$ blocks of functions each time. The paper proposes a new estimator called MSVR for inner function values leveraging the idea of variance reduction techniques such as STORM and SVRG. 
The authors establish the convergence rates of proposed algorithms in the (non)-convex and PL setting. Experimental results on the multi-task deep AUC maximization demonstrate their effectiveness and superiority over SOX - the algorithm proposed previously for solving this problem.\n **Strength**: The paper is well-organized and well-written. They leverage the idea of the STORM estimator [1] to obtain better convergence rates compared to [2]. All the results are clearly stated and the proof (although I didn’t check carefully for convex and PL objectives) is sound and consistent with existing literature.\n\n**Weaknesses**: On the theoretical side, the considered problem and proposed algorithms in this paper can be viewed as variance-reduced block coordinate update methods in stochastic compositional optimation. I partially doubt the novelty of this paper though I did not find any related work in this regime. From the practical point of view, the paper needs more experimental results to support the effectiveness and superiority of the proposed algorithm; see comments below.\n\nReference:\n\n[1] A. Cutkosky and F. Orabona. Momentum-based variance reduction in non-convex SGD. In Advances in Neural Information Processing Systems 32, pages 15210–15219, 2019.\n\n[2] B. Wang and T. Yang. Finite-sum coupled compositional stochastic optimization: Theory and applications. ArXiv e-prints, arXiv:2202.12396, 2022.\n As stated above I have two **major** questions for this paper: \n\n1. For the Coupled Compositional Optimization (CCO) problem considered, one can treat it as a general compositional optimization problem where a unified outer function $f$ can be defined to operate on $[g_1, g_2, \\dots, g_m]$. Moreover, since the outer function considered $f$ is deterministic, one can think of this problem from the perspective of block coordinate updates. It would be much more clear if the authors can elaborate more on their contributions and distinguish them from other existing works on variance reduction and block coordinate updates.\n\n2. The experiments results presented fail to capture the differences between sample complexities of SOX ($\\mathcal{O}(\\epsilon^{-4})$) and MSVR ($\\mathcal{O}(\\epsilon^{-2})$) in the finite-sum non-convex setting. The paper should include more experimental results to demonstrate the dependency of $\\epsilon, n, B_1, B_2$ instead of simply presenting results when fixing $B_1, B_2$.\n\nIn addition, I have a few **minor** comments for this paper:\n\n1. Line 30: Eq. (2) is valid only if each $\\xi_{i}$ **uniformly** distributed over a finite support.\n2. Line 31: As demonstrated above, I don’t think these problems are different from classical stochastic compositional optimization problems.\n3. Table 1: Please indentify $B_1 = |\\mathcal{B}_1^t|$ \n4. Section 2: Please add related works on the variance-reduced block coordinate updates.\n5. Line 182: How to ensure that the linearized update in Eq. (6) can obtain an $\\mathbf{u}_t^i$ within the range of $g_i$? For example, if $g_i$ is nonnegative given its structure, the linearized update may obtain a negative $\\mathbf{u}_t^i$. Would this be a significant issue?\n6. Theorem 3: The authors should at least mention the stage-wise design and refer readers to Reference.\n7. Line 254: Please cite reference papers discussing the convergence rate under the PL condition.\n8. Section 5: It would be more convincing if the authors can further motivate their presented examples. 
I didn’t see any merit in probing only one task in CIFAR-10 and MNIST datasets.\n The theoretical and experimental limitations are clearly stated in the previous section.\n\nThe paper doesn’t have any potential negative societal impact due to the theoretical nature of the work.\n", " In this paper, the authors study the problem of finite-sum coupled compositional optimization (FCCO), in which the inner function may depend on the outer function. When designing stochastic algorithms for FCCO, the main challenge is how to keep track of multiple (inner) functions efficiently. To solve this, the authors propose a new stochastic method named Multi-block-Single-probe Variance Reduced (MSVR) estimator, which could track a sequence of multiple blocks of functions by only probing $O(1)$ blocks. Based on the MSVR estimator, they develop three stochastic algorithms for FCCO, and establish improved complexities for non-convex, convex and strongly convex objectives. Experiments on the multi-task deep AUC maximization show the effectiveness of the proposed methods. Strengths:\n1. In the design of MSVR, the authors propose a customized error correction term, which is novel and powerful. Moreover, based on the MSVR, the authors develop three stochastic algorithms and establish optimal sample complexities.\n2. The paper is generally well-written. In particular, the motivation in Section 3.2 is convincing.\n\n\nWeaknesses:\n1. In the experiments, the authors only compare with SOX. I think it is better to add more baselines.\n2. Experiments are conducted on only one network (ResNet-18), it is expected to provide more experiments with different models. Moreover, the parameter analysis and ablation study are required to show the effectiveness of the proposed method. 1. In line 55, the authors claim that “However, this simple change does not improve the complexities over that obtained by Wang and Yang [2022]”. Do you mean that with Eq.4 the complexity is the same complexities as Wang and Yang [2022]?\n\n2. I am confused with the last column of Table 1. Does it denote the ratio between $B_1$ and $B_2$? For BSGD and BSpiderBoost, there are two Big O notations, but for other methods, there is only one Big O notation. What is the exact meaning?\n\n3. After presenting the main theorems, the authors should discuss how to set the values of $B_1$ and $B_2$.\n Since this paper study an optimization problem, I think that there is no potential negative societal impact of this work.", " This paper considers a general problem: tracking multiple mappings with only O(1) mappings available. An important application investigated in the paper is the Finite-sum Coupled Compositional Optimization (FCCO) problem. By using the proposed Multi-block-Single-probe Variance Reduced (MSVR) estimator, the authors obtain improved complexities for the FCCO problem under different settings. The proposed methods also enjoy better performance in experiments. Overall, the paper is well-written and the contribution is solid.\nStrengths: \n1. The proposed algorithms have better complexities than the previous best-known method for the FCCO problem, which covers wide applications in ML. The authors investigate the complexities under different settings, i.e., non-convex/convex/strongly-convex/finite-sum. Their theoretical guarantees match the lower bound in terms of $\\epsilon$ and $\\mu$ in many settings.\n2. The authors clearly show the intuition of the proposed method and how to determine the value of the parameter $\\gamma_t$. 
The proposed MSVR estimator is of interest and may inspire the algorithm design for other ML problems.\n3. The writing and explanations are clear. The algorithm is simple and the theoretical analysis looks sound to me.\n\nWeakness:\n1. The proofs in the supplementary are not very clearly written and more explanations about each lemma are recommended.\n 1. Could the analysis also be used in contrastive learning, whose loss function can also be written in the form of FCCO? \n2. For the inequality in Lemma 1 on page 5 line 174, should it be $\\beta_{t+1}$ rather than $\\beta_{t}$?\n No problems here." ]
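The author responses in the record above describe the MSVR estimator as a SOX-style moving average plus a momentum/error-correction term (with theta = gamma / beta). For illustration only, and not the authors' code, the NumPy sketch below implements the equivalent closed form of that update for the sampled blocks; the function name, the array shapes, and the convention that unsampled blocks keep their previous estimates are assumptions.

```python
import numpy as np

def msvr_track(u_prev, idx, g_curr, g_prev, beta, gamma):
    """Sketch of the MSVR-style tracking update quoted in the responses.

    u_prev : (m, d) estimates u_{t-1}^i for all m blocks
    idx    : indices of the sampled blocks B_1^t (only these are probed)
    g_curr : g_i(w_t;     xi_t^i) for i in idx, shape (len(idx), d)
    g_prev : g_i(w_{t-1}; xi_t^i) for the same samples xi_t^i
    beta, gamma : step parameters (gamma = beta * theta in the responses)
    """
    u = u_prev.copy()  # unsampled blocks are left unchanged (assumed convention)
    # SOX-style moving average on the sampled blocks ...
    moving_avg = (1.0 - beta) * u_prev[idx] + beta * g_curr
    # ... plus the MSVR error-correction term, which reuses the same
    # stochastic sample evaluated at the previous iterate w_{t-1}.
    u[idx] = moving_avg + gamma * (g_curr - g_prev)
    return u
```

Setting gamma to 0 recovers the plain moving average that the responses attribute to SOX, which makes the role of the correction term easy to see.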
[ -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3, 5 ]
[ "8IrmRA4hzWs", "0mbPb2XjRaO", "Msems-vxL5x", "_c-0v9zaNwY", "6QICfl0PKE9", "cG-6bcvTPXj", "nips_2022_16nVkS8Twxo", "nips_2022_16nVkS8Twxo", "nips_2022_16nVkS8Twxo", "nips_2022_16nVkS8Twxo" ]
nips_2022_tuC6teLFZD
Synergy-of-Experts: Collaborate to Improve Adversarial Robustness
Learning adversarially robust models require invariant predictions to a small neighborhood of its natural inputs, often encountering insufficient model capacity. There is research showing that learning multiple sub-models in an ensemble could mitigate this insufficiency, further improving the generalization and the robustness. However, the ensemble's voting-based strategy excludes the possibility that the true predictions remain with the minority. Therefore, this paper further improves the ensemble through a collaboration scheme---Synergy-of-Experts (SoE). Compared with the voting-based strategy, the SoE enables the possibility of correct predictions even if there exists a single correct sub-model. In SoE, every sub-model fits its specific vulnerability area and reserves the rest of the sub-models to fit other vulnerability areas, which effectively optimizes the utilization of the model capacity. Empirical experiments verify that SoE outperforms various ensemble methods against white-box and transfer-based adversarial attacks.
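The abstract above replaces ensemble voting with a selection rule: each sub-model carries a predictor head and an evaluator (confidence) head, and the prediction of the sub-model with the highest confidence is returned. The PyTorch sketch below is an illustrative reading of that inference rule, not the authors' implementation; it assumes each sub-model returns a pair of class probabilities and a per-example scalar confidence.

```python
import torch

def soe_predict(x, submodels):
    """Sketch: return the prediction of the sub-model whose evaluator head
    reports the highest confidence for each example (no voting)."""
    probs, confs = [], []
    for model in submodels:
        p, g = model(x)                 # p: (batch, classes), g: (batch,)  [assumed interface]
        probs.append(p)
        confs.append(g)
    probs = torch.stack(probs)          # (num_models, batch, classes)
    confs = torch.stack(confs)          # (num_models, batch)
    best = confs.argmax(dim=0)          # best-performing sub-model per example
    batch_idx = torch.arange(x.shape[0], device=x.device)
    return probs[best, batch_idx]       # (batch, classes)
```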
Accept
This paper proposes an ensemble-type solution for improving the adversarial robustness of a model. The proposed idea is simple, yet novel with theoretical supports. The authors did a good job clarifying reviewers' concerns and all reviewers finally recommend acceptance. AC also thinks that this is a good paper in various aspects (novel idea, good write-up, solid theoretical supports) and has a potential to be a generic ensemble-type solution even in non-adversarial/standard setups, e.g., see [1]. Furthermore, the confidence prediction can be used for other purposes, e.g., out-of-distribution detection [2] and active learning [3]. It is useful for readers to discuss about these extensions. [1] Confident Multiple Choice Learning, Kimin Lee et al., ICML 2017. [2] Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers, Apoorv Vyas et al., ECCV 2018. [3] Learning Loss for Active Learning, Donggeun Yoo and In So Kweon, CVPR 2019.
val
[ "06fWOcaoqn2", "UHwoqXRg1sy", "B7FY2zmeTJu", "9M35tIpBksX", "z1OVmQa5JSm", "xd7qyoykPo5", "qk45WHwz5SR", "xLhZ10Pb3b", "88SZdQrcLFe3", "cSPqL_O6bkW", "ZtDaIE03drm", "eNhKGjl7XUC", "WvTWvShbAnCG", "xKEnJ3g_3o", "Zk_6DhGXNAD", "77jTnko3Kgx" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your clarification. I do not have other questions, and I would raise the score.", " Thanks for your reply! We would like to make every effort to clarify the unclear points.\n\n**Question 1:First, the generation of adversarial examples $\\tilde{x}''$ in Part 3 of your response is not discussed in the paper. I'm confused that whether this is what Algorithm 2 exactly describes, or just an optional setting in the implementation of Algorithm 2. I prefer the authors to clarify this setting.** \n\n**Answer:** As the reviewer mentioned, the generalization process of $\\tilde{x}''$ in Part 3 is what Algorithm 2 exactly describes. During adversarial training, we generate adversarial samples using the PGD attack iteratively. In Algorithm 2, we attack the collaboration iteratively to obtain the adversarial samples. However, given a benign sample $x$, when we update the adversarial sample in each iteration, the best-performing sub-model could be different. As we stated in Part 3, in the first iteration, given the input $x$, the best-performing sub-model is $f_{1}$. We obtain an adversarial sample $\\tilde{x}'$ by attacking $f_{1}$. However, in the second iteration, given the input $\\tilde{x}'$, the best-performing sub-model is another sub-model (e.g., $f_{2}$) rather than $f_{1}$. We attack the best-performing sub-model $f_{2}$ to obtain the adversarial sample $\\tilde{x}''$. Therefore, by attacking the collaboration following Algorithm 2, we could obtain the adversarial sample $\\tilde{x}''$ stated in Part 3.\n\nIn our original submission, we only stated that Algorithm 2 could generate the unseen adversarial samples. Following the advice, we have added this detail to our revised submission and highlighted it in blue.\n\n**Question 2: Second, it seems that Table 1 was obtained by using the PGD attack, instead of $\\ell_{1}^{adp}$ or $\\ell_{2}^{adp}$ in your response.**\n\n**Answer:** We would like to explain that in Table 1, the second to the last line **SoE** is the robustness results under the PGD attack, and the last line **SoE (adaptive)** is the robustness results by using the loss function $\\ell_{1}^{adp}$ or $\\ell_{2}^{adp}$.\n\n**Question 3: My question is, which submodel was used when computing the loss in the PGD attack? Did you use the loss function of the best-performing model or the sum of losses of all submodels? Similarly, in $l_{1}^{adp}$ and $l_{2}^{adp}$, it is unclear that the first term $\\ell(f_{\\theta}(x), y)$ was computed on which submodel.**\n\n**Answer:** We would like to explain it as follows.\n\n(1). maximizing the loss $\\ell(f_{\\theta}(x), y)$ could worse the predictor head, and maximizing the loss $l_{1}^{adp}$ ($l_{2}^{adp}$) could worse the evaluator.\n\n(2). we use a weighted loss $\\ell(f_{\\theta}(x), y) + \\lambda \\cdot l_{1}^{adp}$. By maximizing this weighted loss by the PGD attack, we could attack the predictor and the evaluator simultaneously.\n\n(3). in our original experiments under adaptive attacks, we attack the best-performing sub-model, which means we use the loss functions of the best-performing model rather than the sum of losses of all sub-models. Therefore, when computing the first term $\\ell(f_{\\theta}(x), y)$, we also use the loss of the best-performing sub-model to attack the predictor of the best-performing sub-model.\n\n(4). following the advice from Reviewer t6Y8, we also conduct experiments by attacking the predictor (evaluator) of the worse sub-models. 
Experiments under 9 different adaptive attacks validate that our method could outperform baselines on white-box attacks. More details could be found in **Response to Reviewer t6Y8**.\n\n**Question 4: Besides, I suggest the authors put some interesting results to the main text, e.g., the visualization in Appendix C.2 and the discussion about the number of submodels in Section 4.2 in Appendix. These results would better demonstrate the effectiveness of the proposed method.**\n\n**Answer:** We would like to thank the reviewer for the kind advice. Due to the page limit, we put these results in Appendix in our last submission. Following the advice, we tried our best to put these contents in the main text and highlighted them in blue. Meanwhile, due to the page limit, some details were left in Appendix. The revised submission has been uploaded.", " Thank you for your great efforts, and most of my concerns have been addressed. Some details are still not clear. \n\nFirst, the generation of adversarial examples $\\tilde{x}''$ in Part 3 of your response is not discussed in the paper. I'm confused that whether this is what Algorithm 2 exactly describes, or just an optional setting in the implementation of Algorithm 2. I prefer the authors to clarify this setting.\n\nSecond, it seems that Table 1 was obtained by using the PGD attack, instead of $\\ell_1^{adp}$ or $\\ell_2^{adp}$ in your response. My question is, which submodel was used when computing the loss in the PGD attack? Did you use the loss function of the best-performing model or the sum of losses of all submodels? Similarly, in $\\ell_1^{adp}$ and $\\ell_2^{adp}$, it is unclear that the first term $\\ell(f_{\\theta}(x),y)$ was computed on which submodel.\n\nBesides, I suggest the authors put some interesting results to the main text, *e.g.,* the visualization in Appendix C.2 and the discussion about the number of submodels in Section 4.2 in Appendix. These results would better demonstrate the effectiveness of the proposed method.", " Thanks for your detailed reply. My concerns have been addressed and I would raise the score.", " Dear reviewer UDEK:\n\nWe appreciate your efforts in reviewing our paper. Would you mind checking our response, and is there any unclear point so that we could further clarify?\n\nBest regards,\n\nAuthors", " Thanks for the reviewer's reply!\n\n* ****Question 1: I am also curious about attacking the predictor and evaluator of both worse sub-models and the best-performing sub-model.****\n\n **Answer:** Thanks for your question. We would like to explain it as follows.\n\n **7. attacking the predictor of both the best-performing sub-model and the worse sub-model.** \n\n | $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07|\n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n | robustness | 73.2 | 61.4 | 52.3 | 43.5 | 34.0 | 27.6 | 25.5 |\n\n **8. attacking the evaluator of both the best-performing sub-model and the worse sub-model.** \n\n | $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07|\n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n | robustness | 83.3 | 63.7 | 63.0 | 54.3 | 51.1 | 46.9 | 45.6 |\n\n From the above tables, the robustness of our method under the attack of the predictor (or the evaluator) of both the best-performing sub-model and the worse sub-model is higher than the robustness when only attacking the best-performing sub-model, and is lower when only attacking the worse sub-model.\n\n\n **9. 
attacking the predictor and the evaluator of both the best-performing sub-model and the worse sub-model.** We also conduct experiments on attacking the dual heads of both the best-performing sub-model and the worse sub-model with an optimal $\\lambda$. The results are shown in the following table. From the table, the robustness is slightly lower compared with the results in point **7**\n\n | $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07|\n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n | robustness | 73.2 | 59.6 | 47.3 | 38.5 | 32.1 | 27.4 | 24.9 | \n\nFrom the tables above, our method still outperforms baselines under these adaptive attacks.\n\n* ****Question 2: Also, one of my questions is not answered: \"what's the adversarial sample employed in transfer adversarial attack? Is it adaptive or non-adaptive?\"****\n\n **Answer:** We are sorry that we missed this problem when we went all out to answer the question about the adaptive attacks. \n\n For the transfer adversarial attacks, we follow the setting in [1] as we stated in Lines 260-261 in our original submission. In particular, we use surrogate models to generate adversarial samples for all baselines. Since the surrogate model has only one predictor head, we can only attack the predictor head to generate transfer adversarial samples as [1] did. For each sample, we use various attack methods to generate adversarial variants. Only when the model can classify all kinds of adversarial variants can the model successfully defend against adversarial attacks. More information could be found in Sec 4.3 in the main text of our original submission.\n\n* ****Question 3: Some minor typos:...\"****\n \n **Answer:** We would like to thank the reviewer for pointing out the typos in our paper. We have corrected these typos and checked our paper carefully. The revised version of our submission has been uploaded.\n\n[1]. Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, and Hai Li. Dverge: Diversifying vulnerabilities for enhanced robust generation of ensembles. In NeurIPS, 2020.", " Thanks for the author's reply. \n\nI am also curious about attacking the predictor and evaluator of both worse sub-models and the best-performing sub-model.\n\nAlso, one of my questions is not answered: \"what's the adversarial sample employed in transfer adversarial attack? Is it adaptive or non-adaptive?\"", " We would like to thank the reviewer for the very insightful and valuable comments. Below are our responses to the comments.\nTo the comments in ****Weaknesses:****\n* ****Questions: If the author can give a more systematic evaluation setting besides this, I would be more convincing.****\n\n****Answer:**** we acknowledge that a fair comparison under white-box attack is crucial. In our experiments, we found that the attacks on a worse sub-model could have a lower success rate. More analysis and discussions are as follows.\n\nFirstly, we evaluate the robustness of the two heads of the best-performing sub-models.\n\n**1. only attacking the predictor of the best-performing sub-models;**\n \n | $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | robustness | 72.0 | 57.5 | 47.8 | 38.7 | 30.4 | 24.3 | 24.0 |\n\n**2. 
only attacking the evaluator of the best-performing sub-models;**\n \n| $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | robustness | 71.2 | 63.2 | 51.2 | 47.6 | 40.4 | 36.1 | 35.3 |\n\nFrom the two tables above, attacking the predictor could achieve lower robustness. This could also be validated by the results shown in Figure 4 in the main text. From Figure 4, with the increase of $\\lambda$, the attacker focuses more on attacking the evaluator and our model has a higher robustness. Since we use a simple structure to implement the evaluator, attacking the evaluator could be harder than attacking the predictor to fool the collaboration.\n\nTo verify the robustness of the evaluator of the worse sub-models, we maximize the loss $\\ell = \\mathrm{BCE}(g_{\\phi}(x), 0)$ to increase $g_{\\phi}(x)$.\n\n**3. only attacking the evaluator of the worse sub-models to increase the confidence;**\n\n| $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | robustness | 81.9 | 76.4 | 71.5 | 68.3 | 65.9 | 60.0 | 54.6 |\n \nFrom the above table, the collaboration has higher robustness when attacking the evaluator of the worse sub-models. For a best-performing sub-model, it may be susceptible because of overfitting. However, for a sub-model with poor performance, it may have a smoother decision boundary and maintains a more robust performance.\n\nWe also conduct other adaptive methods which attack the predictor and the evaluator simultaneously using a weight loss with a proper $\\lambda$.\n \n**4. attacking both the evaluator and the predictor of the best-performing sub-model;**\n\n| $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | robustness | 70.9 | 56.2 | 45.6 | 38.7 | 28.9 | 22.1 | 21.8 |\n\n**5. attacking both the evaluator and the predictor of the worse sub-models;**\n\n| $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | robustness | 74.3 | 62.2 | 55.0 | 46.2 | 38.6 | 29.8 | 27.1 |\n\n**6. attacking both the evaluator of the worse sub-model and the predictor of the best-performing sub-model;**\n\n| $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | robustness | 71.4 | 56.8 | 50.8 | 38.9 | 28.5 | 25.4 | 23.2 |\n\nFrom the results above, our method still performs better than baselines under various adaptive attacks.", " To the comments in ****Questions:****\n\n* ****Question 7: Given that $g\\_{\\phi}$ and $f\\_{\\theta}$ are learned alternatively, how stable is the learning?.**** \n\n****Answer:**** Instead of learning $g\\_{\\phi}$ and $f\\_{\\theta}$ alternatively, we learn $f\\_{\\theta}$ and $g_{\\phi}$ simultaneously. From Figure 2(b) in the main text, $f\\_{\\theta}$ contains the parameters of the feature extractor and the predictor. $g\\_{\\phi}$ only contains the parameters of the evaluator. Optimizing $g\\_{\\phi}$ and $f\\_{\\theta}$ could update different parameters. Therefore, our method is stable as baselines.\n \n* ****Question 8: Which loss function is used in Table 1 and Figure 4? Why does the robustness increase when the weight $\\lambda$ increases? Why is the robustness higher when $\\epsilon = 0.07$ than $\\epsilon = 0.06$ ?**** \n\n****Answer:**** (1). 
we use two adaptive attacks with two different loss functions ($\\ell^{adp}\\_{1}$ and $\\ell^{adp}\\_{2}$). In table 1, we show the robustness under the strongest loss functions with an optimal $\\lambda$. In Figure 4, we also show the results of the loss function which achieves a stronger attack under different $\\lambda$.\n\n(2). In our experiments, we found that the evaluator is more difficult to be successfully attacked compared with the predictor due to its simple structure. With the increase of $\\lambda$, the attacker focuses more on fooling the evaluator rather than the predictor and this leads to a lower attack success rate. \n\n(3). the robustness is indeed lower when $\\epsilon = 0.07$ than $\\epsilon = 0.06$. All methods achieve a similar robustness when $\\epsilon = 0.06$ and $\\epsilon = 0.07$. From Figure (4), the robustness is lower when $\\epsilon = 0.07$ (24.0%) than $\\epsilon = 0.06$ (24.3%) if we only attack the predictor ($\\lambda = 0.0$). Besides, if we select an optimal $\\lambda$, the robustness is still slightly lower when $\\epsilon = 0.07$ (21.8%) than $\\epsilon = 0.06$ (22.1%).\n\n* ****Question 9: Have you measured the robustness when the ensemble contained different numbers of sub-models? When using SoE, can you use fewer sub-models to achieve higher robustness than previous voting-based methods?****\n\n****Answer:**** we did analyze the affect of the number of sub-models on the robustness in Sec 4.2 in Appendix in our original submission. In summary, we have the following findings: \n\n(1). ****different $\\epsilon$:**** multiple sub-models could achieve a higher robustness improvement given a relatively large $\\epsilon$.\n\n(2). ****different number of sub-models:**** more sub-models are more likely to achieve higher robustness, but the margin gain decreases with more sub-models. \n\nFor the comparison with voting-based methods when using fewer sub-models, we copy the results under transfer attacks in which we use 2 sub-models of SoE.\n\n | $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | GAL (3 sub-models) | 64.2$_{\\pm 4.2}$ | 48.7$_{\\pm 2.7}$ | 50.2$_{\\pm 3.5}$ | 49.9$_{\\pm 3.2}$ | 52.3$_{\\pm 4.5}$ | 48.7$_{\\pm 3.2}$ | 42.2$_{\\pm 4.1}$ |\n | ADP (3 sub-models) | 85.6$_{\\pm.2}$ | 82.9$_{\\pm .2}$ | 78.3$_{\\pm .3}$ | 73.2$_{\\pm .1}$ | 69.6$_{\\pm .2}$ | 60.4$_{\\pm .2}$ | 57.4$_{\\pm .1}$ |\n | DVERGE (3 sub-models) | 83.4$_{\\pm .3}$ | 80.1$_{\\pm .2}$ | 77.3$_{\\pm .1}$ | 72.4$_{\\pm .1}$ | 71.9$_{\\pm .2}$ | 68.8$_{\\pm .3}$ | 66.2$_{\\pm .2}$ |\n | MoRE (3 sub-models) | 84.8$_{\\pm .3}$ | 82.1$_{\\pm .1}$ | 78.4$_{\\pm .2}$ | 74.3$_{\\pm .1}$ | 73.2$_{\\pm .1}$ | 70.3$_{\\pm .2}$ | 69.1$_{\\pm .3}$ |\n | SoE (2 sub-models) | 85.0$_{\\pm .1}$ | 82.5$_{\\pm .2}$ | 78.0$_{\\pm .1}$ | 74.2$_{\\pm .1}$ | 73.0$_{\\pm .1}$ |70.5$_{\\pm .2}$ |64.2$_{\\pm .2}$ |\n\nFrom the above table, SoE with two sub-models achieves similar robustness compared with voting-based methods with three sub-models, which means a more efficient utilization of the sub-models of our method. More discussion could be found in Sec 4.2 in Appendix in our original submission.", " * ****Question 5: Algorithm 2 is somewhat confusing....**** \n\n****Answer:**** we use Alg. 1 and Alg. 2 in sequence to train our collaboration. And Alg. 2 provides unseen adversarial samples. The reasons are as follows.\n\nIn Alg 1, the generated samples $\\tilde{x}$ are the ****most**** adversarial samples of sub-models. 
The samples generated by one sub-models are fit by other sub-models with the best performance. Therefore, after the training converges, there are adversarial samples that could be ****not the most**** adversarial samples of any sub-model, but could fool all sub-models.\n \nAlg 2 is proposed to obtain such adversarial samples. In Alg 2, $\\tilde{x}''$ are generated to worsen $\\hat{p}$, which is the output of the sub-model with the highest confidence. However, during the generalization of $\\tilde{x}''$, given the input $x$, we could firstly obtain an adversarial sample $\\tilde{x}'$ which could fool the best-performing model. Then we continue to generate $\\tilde{x}''$ to fool another sub-model with the best performance on $\\tilde{x}'$.\n \nWe did ablation studies to validate the effectiveness of Alg. 2 in our original submission. We copy the results for your easy reference. \n\n | $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | SoE (without Alg. 2) | 85.0 | 82.3 | 78.5 | 75.0 | 65.0 | 50.0 | 30.0 |\n | SoE | 85.2 | 83.4 | 78.8 | 76.6 | 74.6 | 72.3 | 70.2 |\n\nFrom the above table, without Alg. 2, the robustness of SoE decreases dramatically when $\\epsilon > 0.04$. More discussion could be found in Sec. 4.4 in our original submission.\n\n* ****Question 6: The attacking methods used in experiments are not sufficient. First, the proposed method should be at least compared with the PGD/C&W attack on the best-performing sub-model. Second, the attack that simultaneously destroys the estimator output and the predictor head should be considered. Third, the adaptive attack used in the paper is confusing. There should be more discussions about $l_{2}$ and the selection of $j$. Besides, in $l_{2}$, $y$ is usually not known by the attacker, so it is unclear how to compute $l_{2}$.**** \n\n ****Answer:**** We would like to explain the three points as follows. \n\n ****(1).**** in our original submission, we did C&W attack in transfer attack experiments and black-box attacks in Sec 4.1 in Appendix. In our white-box attack experiments, we only used PGD attacks. Following the advice, we conduct experiments under white-box attacks using C&W attack and the results are shown in the following table. \n\n | $\\epsilon$ (robust/clean) | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | GAL | 41.5/87.8 | 36.4/85.4 | 21.381.2 | 22.9/78.7 | 16.0/77.3 | 9.9/76.2 | 6.4/76.0 |\n | DVERGE | 67.1/85.4 | 51.2/83.0 | 39.3/79.7 | 29.5/77.6 | 21.6/76.7 | 14.9/75.8 | 9.3/75.3 |\n | ADP | 66.9/89.0 | 51.5/86.8 | 38.9/85.4 | 28.9/83.3 | 21.9/76.0 | 20.9/66.4 | 15.2/63.0 |\n | MoRE | 65.3/88.0 | 50.7/85.3 | 37.4/82.0 | 30.8/79.5 | 22.9/78.2 | 15.6/77.1 | 11.2/77.8 |\n | SoE | **70.4**/88.8 | **55.4**/85.6 | **43.4**/80.2 | **33.6**/80.0 | **25.2**/79.1 | **19.6**/76.7 | **15.6**/74.1 |\n \nMoreover, we conduct auto-attack and the results are shown in the following table. 
\n\n | $\\epsilon$ (robust/clean) | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | GAL | 39.0/87.8 | 34.1/85.4 | 18.5/81.2 | 20.2/78.7 | 12.2/77.3 | 7.2/76.2 | 4.7/76.0 |\n | DVERGE | ****69.0****/85.4 | 52.3/83.0 | 36.9/79.7 | 28.7/77.6 | 19.7/76.7 | 10.8/75.8 | 7.2/75.3 |\n | ADP | 66.6/89.0 | 51.6/86.8 | 39.6/85.4 | 27.7/83.3 | 17.6/76.0 | 11.5/66.4 | 7.3/63.0 |\n | MoRE | 64.1/88.0 | 49.9/85.3 | 35.9/82.0 | 28.0/79.5 | 19.3/78.2 | 11.9/77.1 | 7.0/77.8 |\n | SoE | 68.4/88.8 | ****53.4****/85.6 | ****42.3****/80.2 | ****32.5****/80.0 | ****22.9****/79.1 | ****13.0****/76.7 | ****7.5****/74.1|\n \nFrom the above two tables, our method still outperforms baselines under C&W attack and auto-attack.\n\n****(2).**** In our original submission, we did two different adaptive attacks to attack the predictor and the estimator by maximizing a weighted loss. We show the results under the strongest adaptive attacks in Table 1. The experiments show that adaptive attacks slightly degrade the robustness of our method but our method still outperforms baselines.\n\n****(3). adaptive attack is white-box attack. The attacker should know the label $y$ under white-box attack.**** From the formulation of $\\ell\\_{2}$, the gradient of $f_{\\theta}(x)\\_{y}$ is always negative, while the gradient of $g\\_{\\phi}(x)$ is always positive. Maximizing $\\ell\\_{2}$ could decrease the predicted label probability ($f_{\\theta}(x)\\_{y}$) and increase the confidence ($g\\_{\\phi}(x)$). Therefore, maximizing $\\ell\\_{2}$ encourages a mismatch between the predicted probability $f_{\\theta}(x)\\_{y}$ and the confidence $g\\_{\\phi}(x)$. ", " * ****Question 3: The utility of the proposed method in minimizing the vulnerability overlap of all sub-models is not directly proven or verified. The main purpose of the proposed SoE is to minimize the vulnerability overlap of all sub-models. However, its effectiveness is not directly demonstrated. I suggest the authors provide some visual demonstrations or quantitative results to compare the vulnerability lap between SoE and other methods.****\n\n****Answer:**** We thank the reviewer for the advice on directly verifying our proposed collaboration scheme. We would like to explain it from the following aspects:\n\n1. ****visual demonstration:**** following the suggestion, we provide a visualization of the vulnerability area of the collaboration and the ensemble. We add it in Sec. C.2 in Appendix and highlight it in blue. Compared with the ensemble which requires the sub-models to fit the same adversarial samples to defend against adversarial attacks, the collaboration encourages more diverse vulnerability areas of sub-models to minimize the vulnerability overlap. From Figure 7 in Appendix, our collaboration fixes a broader vulnerability area than the ensemble. More details could be found in Sec. C.2 in our rebuttal submission;\n\n2. ****quantitative results:**** The black-box attack is a possible method to quantify the vulnerability overlap. In our original submission, we did black-box attack experiments to validate the effectiveness of the collaboration scheme. The results in the following table are copied from the original submission. 
\n\nblack-box attack experiments\n\n | $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | GAL | 47.0$_{\\pm 2.3}$ | 43.8$_{\\pm1.8}$ | 26.4$_{\\pm 3.2}$ | 27.7$_{\\pm 2.1}$ | 18.5$_{\\pm 1.5}$ | 13.1$_{\\pm 1.2}$ | 8.40$_{\\pm 2.4}$|\n | DVERGE | 72.0$_{\\pm 1.2}$ | 58.8$_{\\pm 1.1}$ | 47.8$_{\\pm 1.0}$ | 37.9$_{\\pm 1.1}$ | 28.4$_{\\pm 1.0}$ | 21.0$_{\\pm 1.2}$ | 15.0$_{\\pm 1.1}$|\n | ADP | 72.5$_{\\pm 1.0}$ |60.3$_{\\pm 1.1}$ | 47.2$_{\\pm 1.3}$ | 37.9$_{\\pm 1.4}$ | 28.0$_{\\pm 1.3}$ | 25.5$_{\\pm 1.0}$ | ****21.3****$_{\\pm 1.2}$ |\n | MoRE | 72.8$_{\\pm .8}$ | 59.6$_{\\pm .8}$ | 46.4$_{\\pm 1.1}$ | 37.8$_{\\pm 1.2}$ | 30.1$_{\\pm 1.3}$ | 22.0$_{\\pm 1.7}$ | 16.6$_{\\pm 1.9}$ |\n | SoE | ****77.7****$_{\\pm .5}$ | ****65.1****$_{\\pm 1.0}$ | ****54.7****$_{\\pm 1.0}$ | ****45.3****$_{\\pm 1.0}$ | ****36.3****$_{\\pm 1.3}$ | ****29.2****$_{\\pm 1.3}$ | 18.0$_{\\pm 1.5}$|\n\nFrom the table, our method achieves better robustness compared with the ensemble methods, and this verifies that our collaboration scheme could achieve a smaller vulnerability overlap.\n \n* ****Question 4: In Algorithm 1, a surrogate loss is used to approximate the loss on the best-performing sub-model. Why not directly use the loss of the sub-model with the largest confidence $g\\_{\\phi}(x)$?****\n\n****Answer:**** Thanks for the question. We would like to explain it from the following aspects:\n\n(1). our collaboration aims to fit the adversarial samples using the true best-performing sub-models. However, at the beginning of the model training, $g\\_{\\phi}(x)$ may not effectively identify the best-performing sub-model. Therefore, we do not directly use the loss of the sub-model with the largest confidence $g\\_{\\phi}(x)$;\n\n(2). directly optimizing the objective of the sub-model with the highest confidence may cause a potential trivial case. For example, if we only optimize the sub-model with the highest confidence, there may be only one sub-model that is properly trained. Therefore, we optimize a surrogate loss to minimize the objectives of all sub-models. ", " \nWe would like to thank the reviewer for appreciating our novelty and the very valuable comments. Below are our response to the comments.\n\nTo the comments in ****Weaknesses:****\n* ****Question 1: The formulation of the evaluator head in the SoE is questionable. In SoE, the evaluator head is designed to estimate the predicted probability $\\hat{p}\\_{y}(x)$ through a BCE loss. However, if $g\\_{\\phi}$ has successfully fitted the probability $\\hat{p}\\_{y}(x)$ then we can directly identify which dimension in $\\hat{p}\\_{y}(x)$ is the same as $g\\_{\\phi}$ thereby obtaining the ground-truth label. Therefore, the ensemble seems useless.**** \n\n ****Answer:**** we would like to clarify it and the reasons are as follows: \n\n (1). ****it is almost impossible to learn such a perfect evaluator;**** the performance of the evaluator is largely affected by the feature extractor. However, in adversarial training, the feature extractor is hard to fit all adversarial samples when there is a relatively large $\\epsilon$. Therefore, the evaluator can be hardly perfect;\n\n (2). ****given an effective evaluator, we still may not directly identify the ground-truth label.**** Since a deep model usually confidently predicts the label is a specific class, for example, $\\boldsymbol{\\hat{p}} = [0.91, 0.008,0.01,0.009,0.011,0.007,0.01,0.011,0.013,0.011]$. 
Suppose the predicted confidence is 0.01 ($g_{\\phi} = 0.01$), it is hard to identify the ground-truth label by identifying which dimension in $\\boldsymbol{\\hat{p}}$ is the same as $g_{\\phi}$.\n\n (3). ****the collaboration mechanism aims to minimize the vulnerability overlap during training, and the evaluator focuses on identifying the prediction with the highest confidence during inference.**** In our proposed framework, the collaboration mechanism means we train sub-models collaboratively so that it could achieve a smaller vulnerability area. During inference, our framework outputs the predictions with the highest confidence. Therefore, there is no collaboration between sub-models during inference as the reviewer mentioned.\n\n* ****Question 2: To learn the collaboration between sub-models, why not directly construct a one-hot label for the best-performing sub-model on each input, and use the cross-entropy to learn the probability of each sub-model being the best-performing model?**** \n\n****Answer:**** Thanks for your question. We would like to explain it from the following aspects:\n \n(1). ****the confidence is more informative than one-hot label.**** the confidence which refers to the predicted label probability is more informative than one-hot label, when evaluating the quality of the predictions. For example, suppose there are three sub-models, when the predicted label probability of the three sub-models are $\\hat{p}\\_{y}^{1} = 0.9$, $\\hat{p}\\_{y}^{2} = 0.1$, $\\hat{p}\\_{y}^{3} = 0.1$, the one-hot label should be [1, 0, 0]. However, if the predicted label probability of the three sub-models are $\\hat{p}\\_{y}^{1} = 0.9$, $\\hat{p}\\_{y}^{2} = 0.89$, $\\hat{p}\\_{y}^{3} = 0.1$, the one-hot labels will still be [1, 0, 0]. In this case, if we use one-hot label to learn models, it will be harder to find the suboptimal predictions ($\\hat{p}\\_{y}^{2} = 0.89$) once the best-performing sub-model is attacked.\n \n(2). ****learning one-hot label may be more difficult to optimize**** In fact, we have tried one-hot learning to identify the best-performing sub-model, but the experiments show that it has lower robustness. Suppose there are three sub-models, the one-hot label of the sub-model $f^{1}$ is determined by the predictions of other sub-models ($f^{2}$ and $f^{3}$). If other sub-models perform worse, the one-hot label of $f^{1}$ is 1, otherwise it is 0. In this case, the one-hot label predictor uses all predictions as input, e.g., $g_{\\phi}(f^{1}(x), f^{2}(x), f^{3}(x))$. We guess the reason behind the poor performance is that the evaluator may be too complex to optimize.", " We would like to thank the reviewer for the positive and very valuable comments. Below are our responses to the comments in ****Questions****.\n\n* ****Question 1: Please provide some empirical evidence that sub-model handles its corresponding adversarial subspace via collaboration scheme.**** \n\n ****Answer:**** we would like to explain it from the following two aspects: \n1. ****visualization:**** following the suggestion, we provide a visualization of adversarial samples with the decision boundaries of collaborated models. We add it in Sec. C.2 in Appendix and highlight it in blue. We compare the vulnerability of the collaboration and the ensemble scheme. From Figure 7 in Appendix, our collaboration achieves a smaller vulnerability area than the ensemble. More details could be found in Sec. C.2 in our rebuttal submission;\n2. 
****quantitive validation**** The black-box attack is a possible method to quantify the vulnerability area. In our original submission, we did black-box attack experiments to validate the effectiveness of the collaboration scheme. The results are copied from the original submission. \n\n | $\\epsilon$ | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | GAL | 47.0$_{\\pm 2.3}$ | 43.8$_{\\pm1.8}$ | 26.4$_{\\pm 3.2}$ | 27.7$_{\\pm 2.1}$ | 18.5$_{\\pm 1.5}$ | 13.1$_{\\pm 1.2}$ | 8.40$_{\\pm 2.4}$|\n | DVERGE | 72.0$_{\\pm 1.2}$ | 58.8$_{\\pm 1.1}$ | 47.8$_{\\pm 1.0}$ | 37.9$_{\\pm 1.1}$ | 28.4$_{\\pm 1.0}$ | 21.0$_{\\pm 1.2}$ | 15.0$_{\\pm 1.1}$|\n | ADP | 72.5$_{\\pm 1.0}$ |60.3$_{\\pm 1.1}$ | 47.2$_{\\pm 1.3}$ | 37.9$_{\\pm 1.4}$ | 28.0$_{\\pm 1.3}$ | 25.5$_{\\pm 1.0}$ | ****21.3****$_{\\pm 1.2}$ |\n | MoRE | 72.8$_{\\pm .8}$ | 59.6$_{\\pm .8}$ | 46.4$_{\\pm 1.1}$ | 37.8$_{\\pm 1.2}$ | 30.1$_{\\pm 1.3}$ | 22.0$_{\\pm 1.7}$ | 16.6$_{\\pm 1.9}$ |\n | SoE | ****77.7****$_{\\pm .5}$ | ****65.1****$_{\\pm 1.0}$ | ****54.7****$_{\\pm 1.0}$ | ****45.3****$_{\\pm 1.0}$ | ****36.3****$_{\\pm 1.3}$ | ****29.2****$_{\\pm 1.3}$ | 18.0$_{\\pm 1.5}$|\n\nFrom the table, our method achieves better robustness compared with the ensemble methods, and this validates that our collaboration could fix broader vulnerability areas.\n \n* ****Question 2: Please include some stronger attacks for comparison, such as auto attack.**** \n\n****Answer:**** following the suggestion, we compare our method with baselines under auto-attack. The results are shown in the following table. \n\n | $\\epsilon$ (robust/clean) | 0.01| 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07| \n | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n | GAL | 39.0/87.8 | 34.1/85.4 | 18.5/81.2 | 20.2/78.7 | 12.2/77.3 | 7.2/76.2 | 4.7/76.0 |\n | DVERGE | ****69.0****/85.4 | 52.3/83.0 | 36.9/79.7 | 28.7/77.6 | 19.7/76.7 | 10.8/75.8 | 7.2/75.3 |\n | ADP | 66.6/89.0 | 51.6/86.8 | 39.6/85.4 | 27.7/83.3 | 17.6/76.0 | 11.5/66.4 | 7.3/63.0 |\n | MoRE | 64.1/88.0 | 49.9/85.3 | 35.9/82.0 | 28.0/79.5 | 19.3/78.2 | 11.9/77.1 | 7.0/77.8 |\n | SoE | 68.4/88.8 | ****53.4****/85.6 | ****42.3****/80.2 | ****32.5****/80.0 | ****22.9****/79.1 | ****13.0****/76.7 | ****7.5****/74.1|\n\nFrom the table above, auto-attack is stronger than PGD and it achieves a higher attack success rate. Our method still outperforms baselines when $\\epsilon > 0.01$.", " This work introduces a model collaboration scheme to tackle the insufficient model capacity against adversarial examples instead of model ensemble. Through replacing voting-based strategy with selecting the best-performing sub-model, each sub-model only fits its specific adversarial areas, which enables the models with limited capacity to achieve better adversarial robustness. An auxiliary head which outputs the confidence is introduced to identify the best-performing sub-models. Experiments on CIFAR-10 and ResNet-20 demonstrate the effectiveness of proposed algorithm. 
Pros:\n\n* This work is well-organized with clear motivation and generally easy to read.\n* The authors provide the source code with instructions, which makes the algorithm easy to follow.\n* The designing of collaboration scheme seems natural and reasonable with its corresponding theoretical analysis.\n* The comparison with other defense baselines on CIFAR-10 demonstrate the superiority of proposed algorithm.\n\nCons:\n\n* The motivation of this paper is that each sub-model can handle its corresponding adversarial subspace via collaboration scheme so that the adversarial robustness with limited model capacity can be improved. However, it is difficult to the empirical evidence of it in experimental section. For example, some visualization of adversarial examples with decision boundaries of collaborated models are recommended.\n* The performance under different attack settings includes PGD and adaptive attacks is reported. However, some popular stronger attacks are not included for comparison, such as autoattack.\n Overall, I think this paper is interesting. I have several suggestions for the authors:\n\n1. Please provide some empirical evidence that sub-model handles its corresponding adversarial subspace via collaboration scheme.\n2. Please include some stronger attacks for comparison, such as autoattack.\n The authors mentioned that they have discussed the limitations in the Appendix.", " This paper proposes a method to improve the adversarial robustness of an ensemble. Instead of using the voting strategy, the proposed method aims to identify the model with the highest confidence on the ground-truth label. In this way, it can improve the model performance when the correct prediction remains with the minority. The authors conduct experiments on both wight-box and black-box attacks to demonstrate the effectiveness of their method. [Strengths]\n+ The authors rethink the classic voting-based strategy and note the problem with the case when correct predictions remain with the minority. Then, they propose a new “collaboration” strategy, which is very novel and interesting.\n+ This paper is well-structured and easy to follow.\n\n[Weaknesses]\n- The formulation of the evaluator head in the SoE is questionable. In SoE, the evaluator head is designed to estimate the predicted probability $\\hat p_y(x)$ through a BCE loss. However, if $g_{\\phi}$ has successfully fitted the probability $\\hat p_y(x)$, then we can directly identify which dimension in $\\hat p(x)$ is the same as $g_{\\phi}$, thereby obtaining the ground-truth label. Therefore, the ensemble seems useless.\n- To learn the collaboration between sub-models, why not directly construct a one-hot label for the best-performing sub-model on each input, and use the cross-entropy to learn the probability of each sub-model being the best-performing model?\n- The utility of the proposed method in minimizing the vulnerability overlap of all sub-models is not directly proven or verified. The main purpose of the proposed SoE is to minimize the vulnerability overlap of all sub-models. However, its effectiveness is not directly demonstrated. I suggest the authors provide some visual demonstrations or quantitative results to compare the vulnerability lap between SoE and other methods.\n- In Algorithm 1, a surrogate loss is used to approximate the loss on the best-performing sub-model. Why not directly use the loss of the sub-model with the largest confidence $g_{\\phi}(x)$?\n- Algorithm 2 is somewhat confusing. 
The adversarial example is generated to worsen $\\hat{p}(x)$, which is the output of the sub-model with the highest confidence. Therefore, this adversarial example is generated only based on the best-performing sub-model. Actually, such adversarial examples have been used in Algorithm 1, so it is confusing why we need Algorithm 2.\n- The attacking methods used in experiments are not sufficient. First, the proposed method should be at least compared with the PGD/C&W attack on the best-performing sub-model. Second, the attack that simultaneously destroys the estimator output and the predictor head should be considered. Third, the adaptive attack used in the paper is confusing. There should be more discussions about $l_2$ and the selection of $j$. Besides, in $l_2$, $y$ is usually not known by the attacker, so it is unclear how to compute $l_2$.\n - Given that $g_{\\phi}$ and $f_{\\theta}$ are learned alternatively, how stable is the learning?\n- Which loss function is used in Table 1 and Figure 4? Why does the robustness increase when the weight $\\lambda$ increases? Why is the robustness higher when $\\epsilon=0.07$ than $\\epsilon=0.06$?\n- Have you measured the robustness when the ensemble contained different numbers of sub-models? When using SoE, can you use fewer sub-models to achieve higher robustness than previous voting-based methods?\n The authors do not address the limitations of their work. ", " This paper proposes an adversarial defense method that can better utilize the ensemble of multiple experts. In contrast to previous ensembling methods that require more than half methods give the correct prediction, for each sample, the proposed method employs a router module that routes the sample to the best model for prediction. In this way, the proposed method can yield the correct result if the selected model can predict right. The proposed method demonstrates better performance compared to previous ensemble methods. Strengths:\n1. The writing is good and easy to follow\n2. There are some theoretical proofs\n\nWeaknesses:\nMy main concern lies in the fairness of evaluation. For white-box attacks, I don't think the non-adaptive SoE can be a fair comparison: all adversarial methods assume the visibility of the whole model, which should include the E head. However, in non-adaptive SoE, E head is not visible. \nThe author also gives a setting with an adaptive adversarial attack that can attack the E head, which is fairer than the previous setting. However, I don't think the adaptive setting is fair enough. For example, besides attacking the best model to be less confident, all worse models should also be encouraged to predict higher confidence. I think this adversarial setting can be more challenging given the E head of worse models can be easier to be fooled for this image (since its backbone is easier to be fooled). \nThis example is one evaluation method. If the author can give a more systematic evaluation setting besides this, I would be more convincing.\n\nAlso, what's the adversarial sample employed in transfer adversarial attack? Is it adaptive or non-adaptive?\n\nSome minor typos:\n1. Algorithm 2, \"duel\" heads should be \"dual\" heads A more careful and convincing white attack method should be employed to evaluate the proposed method. No social negative impact is found." ]
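Much of the discussion above concerns adaptive white-box attacks that target the predictor and evaluator heads jointly by maximizing a weighted loss of the form l(f_theta(x), y) + lambda * l_adp with PGD. The sketch below is one illustrative instantiation, not the exact attack used in the paper: it assumes the model returns (logits, confidence in (0, 1)), uses cross-entropy for the predictor term and a BCE term that pushes the selected sub-model's confidence down, and assumes l-infinity perturbations of inputs in [0, 1]; the default hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def adaptive_pgd(model, x, y, eps=0.03, alpha=0.007, steps=10, lam=1.0):
    """Sketch of an adaptive PGD attack on a dual-head sub-model: maximize a
    weighted sum of the predictor loss and an evaluator loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits, conf = model(x_adv)              # assumed dual-head output
        loss_pred = F.cross_entropy(logits, y)   # fool the predictor head
        # maximizing BCE(conf, 1) drives the evaluator's confidence toward 0
        loss_eval = F.binary_cross_entropy(conf, torch.ones_like(conf))
        loss = loss_pred + lam * loss_eval
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```

With lam set to 0 this reduces to a standard PGD attack on the predictor alone, which corresponds to the lambda = 0 endpoint discussed in the exchanges above.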
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "UHwoqXRg1sy", "B7FY2zmeTJu", "z1OVmQa5JSm", "xd7qyoykPo5", "Zk_6DhGXNAD", "qk45WHwz5SR", "xLhZ10Pb3b", "77jTnko3Kgx", "Zk_6DhGXNAD", "Zk_6DhGXNAD", "Zk_6DhGXNAD", "Zk_6DhGXNAD", "xKEnJ3g_3o", "nips_2022_tuC6teLFZD", "nips_2022_tuC6teLFZD", "nips_2022_tuC6teLFZD" ]
nips_2022_Q8GnGqT-GTJ
Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning
Prompt learning approaches have made waves in natural language processing by inducing better few-shot performance while they still follow a parametric-based learning paradigm; the oblivion and rote memorization problems in learning may encounter unstable generalization issues. Specifically, vanilla prompt learning may struggle to utilize atypical instances by rote during fully-supervised training or overfit shallow patterns with low-shot data. To alleviate such limitations, we develop RetroPrompt with the motivation of decoupling knowledge from memorization to help the model strike a balance between generalization and memorization. In contrast with vanilla prompt learning, RetroPrompt constructs an open-book knowledge-store from training instances and implements a retrieval mechanism during the process of input, training and inference, thus equipping the model with the ability to retrieve related contexts from the training corpus as cues for enhancement. Extensive experiments demonstrate that RetroPrompt can obtain better performance in both few-shot and zero-shot settings. Besides, we further illustrate that our proposed RetroPrompt can yield better generalization abilities with new datasets. Detailed analysis of memorization indeed reveals RetroPrompt can reduce the reliance of language models on memorization; thus, improving generalization for downstream tasks. Code is available in https://github.com/zjunlp/PromptKG/tree/main/research/RetroPrompt.
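To make the abstract's "open-book knowledge-store" concrete, here is a hedged sketch of how such a store could be built with a masked language model and FAISS: keys are prompt-based [MASK] representations, values are the labels of the stored instances. The template, base model, and helper names are illustrative assumptions, not the released RetroPrompt implementation.

```python
import faiss
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()

def mask_embedding(text):
    """Prompt-based instance representation: last hidden state at the [MASK] position."""
    prompt = f"{text} It was {tokenizer.mask_token}."      # assumed template
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs, output_hidden_states=True).hidden_states[-1][0]
    pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    return hidden[pos].numpy()

def build_knowledge_store(texts, labels):
    """Keys: [MASK] embeddings of training instances; values: their labels
    (class indices here, standing in for the label words used in the paper)."""
    keys = np.stack([mask_embedding(t) for t in texts]).astype("float32")
    faiss.normalize_L2(keys)                               # inner-product search on normalized keys
    index = faiss.IndexFlatIP(keys.shape[1])
    index.add(keys)
    return index, np.asarray(labels)
```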
Accept
The paper proposes RetroPrompt, which builds a knowledge-store with training examples and improves few-shot and zero-shot performance. All reviewers appreciate the improvements over competitive baselines and the quality of presentation. The main weaknesses are the lack of ablations to clarify where the gains come from and unclear positioning with respect to previous works (KNN-LM, RETRO, REALM, RAG). The reviewers were unanimous in accepting and the authors addressed some of the issues raised.
test
[ "hyCgs-J09YJ", "GuU_U7r5OXw", "vwo0rOEq-dY", "ykRpRpd30qW", "edIY11fmtTL", "WMCFr1-n8H1", "qxJTDdP4AJf", "AiARcUhgMfu", "6YYq5k8rCDs", "FCCkOsDkfcH", "doQ8GeJEXHV", "dVX_lBc8sw5", "vQC_sYC9NYj", "vR142V3oUUm" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nWe hope that you've had a chance to read our response. We would really appreciate a reply as to whether our response and clarifications have addressed the issues raised in your review, or whether there is anything else we can address.", " Dear reviewers, we sincerely appreciate any suggestions and welcome any more questions about our response.\n\nAll Authors.", " Dear Reviewer puCA:\n\nThank you again for your constructive comments and kindly response. We will carefully revise our paper. We have **compared RetroPrompt with several retrieval-related approaches in the above \"Summary of Revisions\"** for further understanding. ", " Thank you for your response. Some of my concerns have been addressed, though I am not fully convinced about the response to weakness 1. In any case, I have increased my score. ", " Dear reviewers and AC,\n\nWe sincerely appreciate your valuable time and constructive comments.\n\nWe’ve uploaded a revised draft incorporating reviewer feedback. Modified text is shown in blue font. Below is a summary of the main changes:\n\n- Add the comparison with kNN-LM in Sections 3.1 and 3.4.\n- Section 3.2 states that the concatenation of neural demonstration and input instance happens on the dimension of tokens.\n- Section 3.3 states that the connection between $P_{k\\text{NN}}$ and the $p_{k\\text{NN}}$ .\n- Add a footnote noting that this is not a strict zero-shot sense in Section 4.2.\n- Add a comprehensive discussion of limitations in Appendix D, including the analysis of memory usage.\n- Add more results of full-data prompt-tuning in Appendix F.\n\nWe briefly introduce the motivation, method, contribution, and comparison with several retrieval-related approaches as follows:\n\nMotivation: decouple the knowledge from memorization by constructing an open-book knowledge-store from the training data; thus, referring to related knowledge could provide a strong enhancement signal to help the model strike a balance between generalization and memorization.\n\nMethod:\n\n- Construct knowledge-store with regarding prompt-based instance representation as keys and label words as values, which is an initial exploration for prompt learning. \n- Retrieve related examples for each class as the demonstration, which can help the LM to learn by analogy and alleviate rote memorization. \n- Leverage the probability of kNN distribution as the prior external knowledge to guide the PLMs’ parameters attending to hard examples during the training process, which can calibrate the memory of a language model.\n- Incorporate kNN’s classification results into the final prediction at test time, which empowers the model to open book exams.\n\nContribution:\n\n- We propose to decouple knowledge from knowledge to balance memorization and generalization.\n- The first retrieval-enhanced prompt-tuning approach that brings promising improvements to a wide range of NLU tasks in few-shot, zero-shot and cross-domain settings.\n- The comprehensive exploration of leveraging the self-influence based memorization scoring function to analyze the memorization process between fine-tuning, prompt learning and our RetroPrompt. 
\n- Code and datasets are in the supplementary material and will be released for reproducibility.\n\n**Comparison of RetroPrompt with several retrieval-related approaches**:\n\n| Model | Retrieval Set | Main Tasks | Integration Process | Retrieved Content |\n| :-----------: | :----------------: | :----------------: | :------------------------: | :-----------------------------------------: |\n| RETRO | external knowledge | language modeling | pre-training | chunk |\n| REALM | external knowledge | open-domain QA | pre-training | passage |\n| kNN-LM | internal corpus | language modeling | only inference | (K,V)=(embedding, token) |\n| LM-BFF(+demo) | internal corpus | Prompt for NLU | input | passage |\n| KPT | external knowledge | Prompt for NLU | verbalizer | related words of the label |\n| RetroPrompt | internal corpus | Prompt for NLU | input, prompt-tuning, inference | (K,V)=(prompt-based embedding, label words) |\n\nWe hope our responses and revisions address all reviewers’ concerns.\n\nWe sincerely believe that these updates may help us better deliver the benefits of the proposed RetroPrompt to the NeurIPS community.\n\nThank you very much,\n\nAuthors.", " Thank you for the detailed and constructive comments.\n\n**R3:** \n\n**For Q1:** 4-shot and 16-shot setup refer to 4 and 16 examples for each class. For example, the datastore of SST-2 datatset consists of 8 examples in the 4-shot setting. Our work aims to retrieve from the datastore to decouple knowledge from memorization. Despite only 4 examples in each class, the model benefits from the kNN approach as described below:\n\n(a) we can still retrieve related examples for each class as the demonstration, which can help the LM to learn by analogy and alleviate rote memorization. \n\n(b) we can still achieve the probability of kNN distribution as the prior external knowledge to guide the PLMs’ parameters attending to hard examples during the training process, which can calibrate the memory of a language model.\n\n(c) we can still incorporate kNN’s classification results into the final prediction at test time, which empowers the model to open book exams.\n\n**For Q2:** That’s a great question; there is no training phrase for OURS and baselines (except LOTClass) in a zero-shot setup. The difference from kNN-LM is not only the use of neural demonstration but also the construction of the datastore. The gains in the zero-shot setting come from the neural demonstrations, prompt-based construction of the datastore and interpolating the kNN’s results at inference time.\n\n**For Q3:** \n\n- **w/o kNN-test:** in this case, the datastore is also used for retrieving neural demo for demonstration learning and nearest neighbor for kNN guided training. The gains mostly come from better trained LM.\n- **w/o kNN-train:** in this case, this is similar to kNN-LM except there are neural demonstrations from datastore (our datastore is indeed different and can be asynchronously refreshed). This setting is also similar to the main model in a zero-shot setup. The difference is that the model in a zero-shot setup doesn't involve training but “w/o kNN-train” involves training with few-shot data.\n\n**For Q4:** Sorry for the unclear parts. Capital $P_{k\\text{NN}}$ is the probability mass obtained from kNN, the lower-case $p_{k\\text{NN}}$ is the probability value of the gold class in the $P_{k\\text{NN}}$ . The value of $p_{k\\text{NN}}$ can reflect the difficulty of the instance to some extent, so we use it to re-weight the cross-entropy loss. 
We have stated this part more clearly in the revised draft. \n\n**For Q5:** Thank you for your excellent suggestion. We have added a footnote noting that this is not a strict zero-shot sense in the revised draft. Here we follow KPT that retrieves related knowledge without tuning parameters for zero-shot settings to compare fairly. The difference is that KPT retrieves from Related Words([https://relatedwords.org](https://relatedwords.org/)) to empower the verbalizer while we retrieve from unlabeled corpora to take advantage of our retrieval mechanism to improve the generalization of prompt learning. ", " Thank you for the detailed and constructive comments.\n\n**R1:** As far as I know, the practice of combining softmax probability and probability from kNN was not invented by kNN-LM. It originated from Grave et al.(2017a)[1], then kNN-LM[2] followed Grave et al.(2017a) to produce the final kNN-LM distribution, Khandelwal et al. 2021 [3] further explore methods to improve its efficiency along various dimensions for kNN-LM. Our idea of interpolating the nearest neighbor distribution P<sub>kNN</sub> with the model distribution P<sub>LM</sub> using a tuned parameter λ **at test time** is indeed motivated by [1,2,3]. Our paper's overall motivation is to decouple knowledge from memorization with the comprehensive retrieval mechanism for prompt learning rather than the simple interpolation at test stages. We discuss the differences as follows:\n\n(a) Unlike kNN-LM, which solves language modeling tasks with generative models, we mainly focus on NLU tasks with prompt learning. \n\n(b) kNN-LM constructs datastore with sliding generative corpus and tokens, while we explicitly construct knowledge-store with regarding prompt-based instance representation as keys and label words as values, which is an initial exploration for prompt learning. \n\n(c) kNN-LM only incorporates the interpolation of the kNN distribution in the test phase, which offers a little gain in prompt learning (refer to the results of “w/o kNN-test” in Table 4). Therefore, we design the module of “kNN-train ” and “Neural Demonstration” to improve generalization ability during the input and training stages. \n\nThanks a lot for your constructive suggestions and sorry for the incomplete discussion of the difference between ours and kNN-LM. We have added more comparisons and citations in Sections 3.1 and 3.4 of the revised draft.\n\n[1] Unbounded cache model for online language modeling with open vocabulary. NIPS 2017. \n[2] Generalization through Memorization: Nearest Neighbor Language Models. ICLR 2020. \n[3] Efficient nearest neighbor language models. EMNLP 2021.\n\n**R2:** Sorry for the missing reference (RETRO Borgeaud et al. 2021) and we have added it in Section 5 of our paper. We just name our model RetroPrompt because the generalization performance of prompt learning is improved with the retrieval mechanism across a comprehensive manner.\n", " Thank you for the detailed and constructive comments.\n\n**R1:** \n\n1). We reproduce the public codes of KPT for experiments on the datasets we chosen and select the same templates as ours. We also follow their paper to adopt Related Words ([https://relatedwords.org](https://relatedwords.org/)), a knowledge graph aggregated from multiple resources, including word embeddings, ConceptNet, WordNet, etc., as the external KB to expand the verbalizers. Here we absolutely did not weaken the compelling results. 
The reality is that the results of KPT are worse than LM-BFF (mainly for part datasets of GLUE) in the 16-shot setting but averagely better than LM-BFF in the 4-shot setting. Maybe KPT extends the label with the many additional related words and also introduces much noise. \n\n2). In our zero-shot setting, the pseudo-labeled training sets are available to both RetroPrompt and LM-BFF(+demo). Thus LM-BFF(+demo) retrieves from that with examples that are semantically close to the input instance as the discrete demonstration.\n\n**R2**: \n\n1). Sorry for the unclear parts. We do not use FocalLoss. We propose to leverage the kNN’s classification results as the prior external knowledge to guide the PLMs’ parameters attending to hard examples during the training process. FocalLoss is similar to our motivation for calibration training and we just introduce FocalLoss for better understanding. On the other hand, our motivation of calibration training is different from a linear combination of the original cross-entropy loss and a KNN loss at training time. Moreover, kNN is a non-parameterized classifier, which is non-derivable. Thus, we cannot compare with that. We have stated this part more clearly in the revised draft.\n\n2). The primitive demonstration is concatenated with the input example at the word embedding layer. Here we follow the previous practice to set neural demonstration at the embedding layer to participate in the information exchange of the current input instance. \n\n**R3:**\n\n- **For Q1:** Yes, for neural demonstration, the concatenation of input instance and neural demonstration happens on the dimension of tokens. Thanks for your excellent suggestion; we have stated it more clearly in Section 3.2 of the revised draft.\n- **For Q2:** Yes, in section 4.4, we prompt-tune the LM with 16 examples from the source dataset and not prompt-tune on the target dataset at all.\n", " Thank you for the detailed and constructive comments.\n\n**R1:** Prompt learning has arisen to improve the few-shot learning of LMs significantly. Thus, we mainly focus on few-shot, zero-shot and cross-domain settings in the experiments. Due to text space limitations, we have complemented the performance of full-data prompt-tuning in Appendix F of the revised draft.\n\n**R2:** Actually, our method adopts FAISS tools for retrieval. FAISS is an excellent open-source library for fast nearest neighbor retrieval in high-dimensional spaces, which supports searching only from RAM, which involves k-means clustering for improving memory usage efficiency. Memory usage is negligible in the few-shot settings and acceptable in the full-data settings. Our retrieval process is performed mainly on the CPU, and we compare the utilization of the CPU with and without retrieval in the SST-2 full-data setting. \n\n- The CPU utilization was 46.2% with the retrieval process and 2.5% without it (Our CPU is Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz with 40 cores).\n- In terms of memory usage, adding retrieval requires about 2.5G more memory than not. One way to reduce resource usage is to store the datastore on the disk in advance, then read and release it in the retrieval process.", " Thank you for the detailed and constructive comments.\n\n**R1:** \n\n- The general idea of using retrieval modules on memories (Google’s REALM, and Meta’s RAG), has been proposed to retrieve from the **external** knowledge corpus (e.g., Wikipedia) with **pre-training** for a particular task (e.g., an open-domain question). 
In contrast, we develop RetroPrompt with the motivation of decoupling knowledge from memorization to help the model strike a balance between generalization and memorization (mainly for prompt-tuning) by retrieving examples from the **internal** training data, which is essentially different from the motivation of REALM and RAG, i.e., introducing an external corpus for reasoning. In fact, it doesn't make sense to rigidly use REALM and RAG for pre-training based on internal training data. Thus, we can't include REALM and RAG as baselines due to the different knowledge sources they retrieve from.\n- Our method is more related to kNN-LM, which only uses an **internal** corpus. kNN-LM and FiD both only use nearest neighbors in **the inference process**, while we aim to develop a comprehensive retrieval mechanism for the **input, training (our original designs of “prompt-based datastore construction”, “kNN-train” and “Neural Demonstration”) and test processes** to further improve the **generalization** of prompt-tuning. Moreover, kNN-LM is proposed to solve language modeling tasks by retrieving from a sliding generative corpus at test time, while we mainly focus on prompt learning for NLU tasks. Thus, it is hard to compare with kNN-LM on the same tasks directly. But we will try to generalize our work to generative and knowledge-intensive tasks in the future.\n\nNote that we consider the comprehensive retrieval mechanism to improve the generalization of prompt learning rather than a simple combination of different models in different stages. We also adopt self-influence as our memorization scoring function to analyze the memorization process between fine-tuning, prompt learning and our RETROPROMPT. The final analysis results show that 1) the training instances with the highest memorization scores tend to be atypical, and 2) RETROPROMPT generalizes better than fine-tuning and conventional prompt-tuning by decoupling knowledge from memorization to alleviate the rote memorization of PLMs.\n\n**R2:** As discussed in R1, our motivation is essentially different from retrieval-augmented generation for knowledge-intensive tasks, such as complicated reasoning. We focus on helping the model balance generalization and memorization without additional external knowledge, mainly for prompt-based NLU tasks. The tasks chosen in this work also involve long-text tasks (such as information extraction). We will try to extend our work to knowledge-intensive tasks in the future, and thanks again for your insightful suggestions.\n\n**R3:** Our method mainly involves retrieving related examples from the corresponding training data to further improve prompt-tuning generalization. As described in the paper (Line 165), we mainly adopt inner product similarity for retrieving nearest neighbors. Intuitively, RetroPrompt is orthogonal to previous IR-based approaches (such as DPR, BM25), which are aimed at computing relevance scores between documents. We have conducted experiments in Section 4.6 to compare our representation-based similarity score with a BM25-based one. The results in Table 5 show that representation-based similarity scores for kNN lead to better performance. \n\n**R4:** Thanks for your suggestion; we have improved some informal expressions in the revised draft based on the constructive comments of all the reviewers.\n\n**R5:** We are sorry that we did not include a separate section discussing limitations due to text space constraints. Despite this, we have previously discussed the efficiency overhead of the proposed method in Appendix D. 
Thanks a lot for your constructive suggestions; we have further complemented the comprehensive discussion of limitations in Appendix D of the revised draft.", " This paper introduces RetroPrompt, which constructs an open-book knowledge store from training instances and implements a retrieval mechanism during the process of input, training, and inference, thus equipping the model with the ability to retrieve related contexts from the training corpus as cues for enhancement.\n Strengths:\n\n1. The experiments are extensive and convincing.\n2. The idea of using kNN as a grounding demonstration is technically sound and plausible.\n3. The choices of datasets and tasks are diverse.\n\nWeaknesses\n\n1. The idea of using nearest neighbors to train (kNN-LM), at inference (FiD), and the general idea of using retrieval modules on memories (Google’s REALM, and Meta’s RAG), have been proposed and well studied. It’s unclear to me what the novelty of this work is. It seems like a combination of different models in different stages. And none of the models mentioned above are included as baselines.\n\n2. What’s the performance of your method on open-ended knowledge-intensive tasks? The tasks chosen in this work are mainly short-text tasks that can be answered with closed-form short texts (mainly overlapping with LM-BFF). Other tasks such as MSMARCO and Wizard of Wikipedia normally require more complicated reasoning rather than simple keyword memorization.\n\n3. It is important to include some information retrieval systems as baselines, since if you integrate the retrieval system into every stage of the LM, what’s the benefit of using an LM for unreliable prediction? I would like to see some analysis comparing with IR-based methods, such as DPR, BM25, etc. (+ necessary post-processing modules), if applicable.\n\n4. The writing can be improved, especially some informal expressions.\n\n5. No discussion of limitations. See the comments in the weaknesses, especially points 2 and 3.\n Discussion of limitations is lacking.", " They propose a method for better zero-shot and few-shot prompt learning called RETROPROMPT. To decouple knowledge from memorization, they introduce a knowledge-store which contains key-value pairs of a [masked] label representation and its corresponding label word for the target dataset. Using kNN on this neural knowledge-store, guided training and open-book inference can be implemented. By achieving the best performance on 9 NLU tasks, the proposed method is shown to be effective. [Strengths]\n- Well written.\n- The paper proposes a technically sound and smart idea for balancing generalization and memorization.\n- Various experiments prove the effectiveness of the proposed RETROPROMPT for zero-/few-shot learning.\n\nWeaknesses:\n- Experiments on full fine-tuning for all 9 datasets would be better for understanding the proposed work (only 3 results out of 9 datasets are included in the paper).\n- The proposed model may suffer from enormous memory usage, especially for full fine-tuning. For full fine-tuning, does the proposed model suffer from tremendous memory usage? - The knowledge-store might need to be managed for full fine-tuning on a large training dataset.\n- As mentioned in the conclusion, this paper only deals with NLU tasks with encoder models. ", " This paper proposes an idea to retrieve from few-shot examples to prevent the prompt tuning process from biasing toward learning from atypical examples. 
The combination of three techniques - knn-guided training, neural demonstration, and the knn-guided test-time prediction leads to a strong empirical performance across a broad set of tasks. Analysis shows that the proposed method effectively mitigates memorization. ### Strengths\n- The proposed method shows very strong empirical results compared to strong baselines\n- It shows that even in the few-shot setting, retrieval-based approaches would be helpful for stabilizing and improving results without introducing additional parameters, and it could lead to useful application in a broader context.\n\n### Weaknesses\n- Details are missing, especially about the baseline and it weakens the compelling results. 1) KPT is a method to expand the verbalizers, what verbalizers did you get? I don't understand why the results would be worse than LM-BFF as it seems to be LM-BFF + additional verbalizers? 2) For LM-BFF, how did you get the scores with demonstrations in a zero-shot setting? \n- Some design choices are not well justified or ablated: 1) Why specifically use FocalLoss? How does it compare to a linear combination of the original cross-entropy loss and a KNN loss similar to KNN-LM and it would be closer to the retrieval setup during test time? 2) Why does neural demonstration happen at the embedding layer? \n - For neural demonstration, I assume that the concatenation happens on the dimension of tokens? I suggest stating it more clearly in the paper.\n- For section 4.4, did you just use 16 examples from the source dataset and not fine-tune on the target dataset at all?\n Yes, the authors discuss about efficiency overhead of the proposed method in appendix.", " This paper proposes a new semi-parametric approach for few-shot learning, where the prediction is made based on the combination of softmax probability from the language model (LM) and probability from retrieval over a knowledge store (in this paper, training data, either unlabeled or labeled). The paper proposes a few model components to make this approach work – the LM trained to be a good retriever (along with asynchronous update of the index), neural demonstrations (using nearest neighbors of the query from the knowledge store), and a modified objective (called guiding training in the paper). Experiments are done in a few-shot setup (where a few labeled examples leads to a knowledge store) and in a zero-shot setup (where an unlabeled training data is a knowledge store, and there is no parameter updates), and show strong results on 9 different NLP datasets (classification and information extraction) over competitive zero- and few-shot baselines.\n ### Strengths\n\n* The idea of applying retrieval over a knowledge store for language model prompting is of interest in the community and is a timely topic.\n* This paper is one of the first that applies the kNN-LM approach for downstream tasks, as far as I know.\n* Empirical results are strong, outperforming a range of competitive baselines.\n* Experiments also include extensive ablations, showing the impact of each model component.\n\n\n### Weaknesses\n\n* The idea is strongly based on the kNN-LM model (Khandelwal et al. 2021). In fact, the overall idea of combining softmax probability and probability from retrieval is entirely taken from kNN-LM, and how the model works at inference time is identical with kNN-LM, if I understand correctly. Although this paper is cited, it is cited only briefly (only cited twice, in Section 3.4 and Section 5). 
It looks to me that the paper should extensively discuss kNN-LM and describe the differences. (To clarify, I do believe there are substantial differences. They are just not discussed in the paper.)\n* The model seems named after RETRO (Borgeaud et al. 2021), but the paper does not cite nor discuss the RETRO paper. In fact, I do not think the connection between this paper and RETRO is very tight – they’re only loosely related in the sense that both do retrieval, but what they retrieve, where they retrieve from, how they incorporate retrieval to LM, and what problems they are solving are all different. It is indeed not very intuitive to me why the model is named after RETRO, given that it is significantly more related to kNN-LM.\n* It is not entirely clear to me what exactly are giving improvements, e.g., how the model gains from retrieval when the data store is extremely small (e.g., with 4 examples), how the zero-shot model achieves significant gains even with no training (which I believe is the main contribution of the paper), how the model still outperforms most competitive baselines (in Table 1) even without kNN-test that does not actually incorporate retrieval at test time, etc. I discussed them in more detail in the “Questions/comments” section.\n\n(I am giving a borderline score due to the weaknesses mentioned here, but am happy to increase the score if they are resolved during the author rebuttal period.)\n\nBorgeaud et al. 2021: https://arxiv.org/abs/2112.04426 * kNN-based models are known to be effective when the size of the datastore is large. In this context, it is not intuitive to me how the model benefits from the kNN approach in a few-shot setup, where the datastore only consists of 4 or 16 examples.\n* Is my understanding correct that, in a zero-shot setup, since there is no training phrase, the only difference from kNN-LM in terms of the model is the use of neural demonstration? Based on Table 1, the proposed model has significantly more improvements in a zero-shot rather than a few-shot? Are all gains here coming from neural demonstrations?\n* Questions about Table 4:\n * w/o kNN-test: in this case, since the data store is not actually used, are gains mostly coming from better LM?\n * w/o kNN-train: in this case, is it identical to kNN-LM except there are neural demonstrations? (I know the knowledge store still looks different, but am curious in terms of modeling.) And is my understanding correct that this is identical to the main model in a zero-shot setup?\n* Equation (6): It is not entirely clear to me how this objective achieves the problem mentioned in the section, and if this objective is mathematically sound. (e.g. why does it have to be the form of the multiplication between two log probability values?)\n* Zero-shot setting: It seems debatable whether the use of training data (even if labels are not used) can be called zero-shot, although I understand authors followed the setup from previous work. Maybe it’s good to add a footnote noting that this is not zero-shot in a stricter sense? The paper does not include limitations of work. I suggest authors to indicate limitations of the proposed model (e.g., when the proposed model may not work) as well as broader risk in applying large LMs in downstream tasks." ]
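Several reviews and responses above discuss the kNN-LM-style interpolation of a retrieved distribution with the language-model distribution at test time. A schematic version of that step is sketched below, assuming a FAISS index of prompt-based embeddings and per-instance labels like those in the earlier sketch; the temperature and λ values are placeholders rather than values from the paper.

```python
import faiss
import numpy as np
import torch

def knn_distribution(query_emb, index, labels, num_classes, k=16, temperature=1.0):
    """Distribution over classes induced by the k nearest stored instances.
    k should not exceed the number of items in the store (tiny in few-shot settings)."""
    q = np.ascontiguousarray(query_emb.astype("float32")[None, :])
    faiss.normalize_L2(q)                                # match the normalized keys in the store
    sims, idx = index.search(q, k)
    weights = torch.softmax(torch.from_numpy(sims[0]) / temperature, dim=0)
    p_knn = torch.zeros(num_classes)
    for w, j in zip(weights, idx[0]):
        p_knn[labels[j]] += w                            # each neighbor votes with its label (word)
    return p_knn

def fuse(p_lm, p_knn, lam=0.5):
    # Test-time interpolation discussed above: lam * P_kNN + (1 - lam) * P_LM.
    return lam * p_knn + (1.0 - lam) * p_lm
```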
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "vR142V3oUUm", "nips_2022_Q8GnGqT-GTJ", "ykRpRpd30qW", "FCCkOsDkfcH", "nips_2022_Q8GnGqT-GTJ", "qxJTDdP4AJf", "vR142V3oUUm", "vQC_sYC9NYj", "dVX_lBc8sw5", "doQ8GeJEXHV", "nips_2022_Q8GnGqT-GTJ", "nips_2022_Q8GnGqT-GTJ", "nips_2022_Q8GnGqT-GTJ", "nips_2022_Q8GnGqT-GTJ" ]
nips_2022_Soadfc-JMeX
HSDF: Hybrid Sign and Distance Field for Modeling Surfaces with Arbitrary Topologies
Neural implicit function based on signed distance field (SDF) has achieved impressive progress in reconstructing 3D models with high fidelity. However, such approaches can only represent closed shapes. Recent works based on unsigned distance function (UDF) are proposed to handle both watertight and open surfaces. Nonetheless, as UDF is signless, its direct output is limited to point cloud, which imposes an additional challenge on extracting high-quality meshes from discrete points. To address this issue, we present a new learnable implicit representation, coded HSDF, that connects the good ends of SDF and UDF. In particular, HSDF is able to represent arbitrary topologies containing both closed and open surfaces while being compatible with existing iso-surface extraction techniques for easy field-to-mesh conversion. In addition to predicting a UDF, we propose to learn an additional sign field via a simple classifier. Unlike traditional SDF, HSDF is able to locate the surface of interest before level surface extraction by generating surface points following NDF~\cite{chibane2020ndf}. We are then able to obtain open surfaces via an adaptive meshing approach that only instantiates regions containing surface into a polygon mesh. We also propose HSDF-Net, a dedicated learning framework that factorizes the learning of HSDF into two easier problems. Experiments on multiple datasets show that HSDF outperforms state-of-the-art techniques both qualitatively and quantitatively.
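The abstract describes combining an unsigned distance with a learned sign field and meshing only the regions that actually contain surface. A hedged sketch of such an extraction pipeline is given below; `udf_net` and `sign_net` are hypothetical callables mapping (N, 3) query points to values, and the simple distance-threshold mask is a stand-in for the paper's gradient-based surface localization.

```python
import numpy as np
import torch
from skimage.measure import marching_cubes

def extract_open_mesh(udf_net, sign_net, resolution=128, band=2.0):
    """Evaluate a hybrid sign*distance field on a grid and run masked marching cubes,
    so cells far from any surface are never instantiated (keeps open boundaries)."""
    xs = np.linspace(-1.0, 1.0, resolution, dtype=np.float32)   # assumed normalized bounds
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
    pts = torch.from_numpy(grid)

    with torch.no_grad():
        udf = udf_net(pts).reshape(resolution, resolution, resolution).cpu().numpy()
        prob = sign_net(pts).reshape(resolution, resolution, resolution).cpu().numpy()

    sign = np.where(prob >= 0.5, 1.0, -1.0)      # inside/outside probability -> {-1, +1}
    hsdf = sign * udf                            # hybrid field, signed near the surface
    voxel = 2.0 / (resolution - 1)
    mask = udf < band * voxel                    # only mesh cells close to the surface

    verts, faces, normals, _ = marching_cubes(hsdf, level=0.0, spacing=(voxel,) * 3, mask=mask)
    return verts - 1.0, faces, normals           # shift from grid frame back to [-1, 1]^3
```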
Accept
The reviewers agree that the paper's idea to include both sign and distance fields is a valuable contribution to 3D computer vision research. Reviewers ask sensible clarifying questions (e.g., orienting the training data, sign network continuity) and the rebuttal's answers are illuminating and to the point. A short note on terminology: I agree that "adversarial" should not be used here as it has a special meaning for the wider NeurIPS audience. Regarding other wording suggestions, I add no extra vote for or against.
train
[ "BKGqAO_Ye-h", "5w5ehCuIJFT", "7zOGEdWwNUE", "ixSAzRzzrJ", "8dGgN01Hwbl", "MGOogJlOvdH", "1GuEZ2jtB1r", "PQmNtS7KWiC", "DkhJCTT_1Sl", "hc0av-EJdtYz", "QPhvP7fFvkG0", "8F-t8eUpqopP", "eaHAgmV46_-", "UiBkByyU-hq", "qOETV4zKql" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
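The author responses that follow evaluate robustness by perturbing each input point as p' = p + σ·(dx, dy, dz)ᵀ with dx, dy, dz ~ N(0, 1) and reporting Chamfer distance. A small sketch of that protocol is below; the symmetric squared-distance Chamfer convention and the SciPy KD-tree are assumptions here, not necessarily the authors' exact evaluation script.

```python
import numpy as np
from scipy.spatial import cKDTree

def perturb(points, sigma, rng=None):
    """p' = p + sigma * (dx, dy, dz)^T with dx, dy, dz ~ N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    return points + sigma * rng.standard_normal(points.shape)

def chamfer_distance(a, b):
    """Symmetric squared-distance Chamfer between two point sets (one common convention)."""
    d_ab, _ = cKDTree(b).query(a)   # nearest neighbor in b for each point of a
    d_ba, _ = cKDTree(a).query(b)   # nearest neighbor in a for each point of b
    return np.mean(d_ab ** 2) + np.mean(d_ba ** 2)

# Usage idea: reconstruct from perturb(input_points, sigma) for several sigma values
# and report chamfer_distance(reconstructed_samples, ground_truth_samples).
```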
[ " Thanks to the reviewer nGbx for further explanation, we provide some point cloud data[A] (mentioned by reviewer oe6G) and the reconstruction script[B] provided by NDF's author in these links below. If you are interested, you can try it for the reconstruction of the point cloud.\n\n[A] https://ufile.io/kdu0hoja\n\n[B] https://github.com/jchibane/ndf/issues/16\n\n\n", " Yes, it is a pleasure for us to share the generated dense point clouds (used in Fig 1,5,6) with you. The download link is provided in [A]. All the point clouds are generated following the official code released by NDF. The MeshLab meshing scripts are released by NDF's author in [B]. We use a BPA radius 0.01 (mentioned in sec 1.4 of supplemental material) instead of 0.005 in [B] since after lots of tests it demonstrates that the radius 0.005 would take a very long time to mesh even a single dense point cloud. In addition, in paper [C], they claimed: \"In our experiments, we have found the ball-pivoting process to be very sensitive to this radius, and in many cases, it had to be tuned per-shape\". We also found this in our experiments, so we would recommend experimenting with different radii of BPA for a more intuitive understanding of its limitations.\n\n\n[A] https://ufile.io/kdu0hoja\n\n[B] https://github.com/jchibane/ndf/issues/16\n\n[C] Deep Implicit Surface Point Prediction Networks (ICCV'21)", " We want to thank you for recognizing our work and for your insightful advice! We really appreciate that! And we will improve these wordings and expressions as you suggested. Thank you!:-)", " According to [MeshUDF: Fast and Differentiable Meshing of Unsigned Distance Field Networks] the meshing procedure is indeed a huge burden. Despite the point cloud being arbitrarily dense with NDF's gradient descent, it is slightly noisy. Meshing it with the ball pivoting algorithm appears to be extremely slow and to yield rough meshes.\nEstimating normals and using marching cubes (as suggested by reviewer oe6G) would only work locally for open surfaces.\n\nThis being said, I am also curious to see if an off-the-shelf point cloud meshing method can provide good results.", " Thanks for the response! \n\nAccording to my experience, after getting dense enough points, it's not hard to obtain high-quality meshes. In the NDF paper, they have also mentioned that \"Since we are able to efficiently extract millions of points, naive classical algorithms for meshing [9] (which locally connect the dots) can be used to generate high-quality meshes.\" \n\nWould you mind sharing the generated dense point clouds (used in Fig 1,5,6) for me to have a try?\n\n", " Two minor points:\n- I would avoid using the word \"adversarial\" in the context of deep neural networks. \"Respective shortcomings\" is much more precise.\n- By \"...\" I meant \"Unlike traditional SDF, HSDF is able to locate the surface of interest\". This is not a novelty of HSDF, NDF introduced it and it can be applied to any SDF.", " Dear AC and all reviewers:\n\nThanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper!\n\nSince we have only one day left in the discussion phase, and we have not heard back from anyone yet regarding their post-rebuttal response. Please don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. 
Thanks!", " We thank the reviewer for the insightful comments and suggestions! We provide our detailed response below.\n\n> Q1: Several recent works with similar ideas exist, such as [A]. It would be better if the authors could discuss and compare the differences.\n\nA1: Our work is concurrent to 3PSDF[A]. 3PSDF proposes an implicit representation that divides the space into three classes, namely positive, negative and null. Compared to watertight SDF, their null class enables 3PSDF to mask out certain regions in the space and represent open shapes. 3PSDF formulates the reconstruction task as a classification problem. It eases the learning difficulty but also limits the applications on downstream tasks that require SDF continuity, such as neural rendering. In contrast, HSDF can be adapted for neural rendering thanks to its continuous nature. Since they released their paper and code after our submission deadline, we didn't compare them in our main paper. We will add discussions with 3PSDF in our revision.\n\n[A] 3PSDF: Three-Pole Signed Distance Function for Learning Surfaces with Arbitrary Topologies (CVPR'22)\n\n\n>Q2: Discussion about robustness. The input of all comparison methods is a point cloud. Suppose the point cloud contains some unexpected noise, what about the performance drop or influenced by the perturbation of inputs?\n\nA2: We have evaluated it on the MGN test set. For every point $p$ in the input point cloud, the noisy $p'$ is calculated by the formulation $p'=p + \\sigma*(dx, dy, dz)^T$, where dx, dy and dz are sampled from standard normal distribution $N(0,1)$ and $\\sigma$ is a coefficient that describes the level of noise. And we use the pre-trained model checkpoint which is the same as our main paper. The evaluations on chamfer distance (x$10^{-4}$) are shown below:\n||$\\sigma=0$|$\\sigma=0.005$|$\\sigma=0.01$|$\\sigma=0.02$|\n|:-:|:-:|:-:|:-:|:-:|\n|NDF|0.158|0.203|0.247|0.376|\n|Ours|**0.151**|**0.199**|**0.242**|**0.362**|\n\nWe will add more robustness comparisons to our revision.\n\n> Q3: The notation in this paper might need further improvement.\n\nA3: Thank you for the advice! We will improve the notations in our revision.\n", " We thank the reviewer for the insightful comments and suggestions! We provide our detailed response below.\n\n## Main Questions\n\n> Q1: The main limitation is that learning a prior on triangle facets orientation requires datasets to be properly oriented, which might not always be possible. This huge limitation is only slightly mentioned in Fig. 7 but should be discussed more thoroughly and sooner.\n\nA1: As we mentioned in Section 4.1, all the training data can be robustly oriented using the released code of [A]. Another concurrent work, 3PSDF [B], also uses this work [A] for consistent normal orientation and achieves impressive results. Once the surface normal of training data is pre-processed, our method does not require any further processing for the test data. We will release our pre-processed data to encourage future research. In addition, more discussion on data processing will be discussed more thoroughly and earlier in the revision. \n\n[A] Repairing man-made meshes via visual-driven global optimization with minimum intrusion (SIGGRAPH Asia 2019)\n\n[B] 3PSDF: Three-Pole Signed Distance Function for Learning Surfaces with Arbitrary Topologies (CVPR'22)\n\n> Q2: The current formulation reads as if factoring SDFs into distance times sign was making the field easier to be learned by MLPs because it is more continuous. 
But in fact, the sign network is highly non-continuous.\n\nA2: We agree that the sign field is not continuous for open surfaces. However, **the discontinuity only happens in the regions that are far from the surface**. Our approach, on the other hand, only needs to learn an accurate sign field in the **vicinity of the surface**, as our distance field will guide our reconstruction to it (Section 3.4). In fact, we leverage an importance sampling strategy to focus our training samples around the target surface (Section 1.2 of the supplemental materials) -- 99% of samples are distributed in a narrow band around the surface where the sign field is highly continuous. Hence, our sign field is easy to learn.\n\n\nIn addition, in Figure 8, we provide an ablation study comparing the performance of an alternative method that does not factorize the SDF into sign and distance fields. The discontinuity of the sign field far from the surface increases the difficulty of joint learning and leads to erroneous reconstruction. This further verifies the effectiveness of our factorized learning. We will make this clearer in our revision.\n\n\n> Q3: The intro states that \"...\" , but this is not true, see NDF (Chibane 2020) for example. The proposed \"Masked Marching Cubes\" could be applied to normal SDFs, and should also be demonstrated in this case.\n\nA3: We are not sure what the \"...\" refers to. Here, we can only provide our response based on our limited understanding. We agree that the proposed \"Masked Marching Cubes\" can be applied to the normal SDFs to cast shapes with arbitrary topologies. However, the key question to ask is how to obtain such a mask only from a normal SDF. In our method, we leverage the gradient field of UDF to push a query point to its closest neighbor on the zero-level surface. The grid cell that the query point finally lands on is masked as valid for performing marching cubes (mesh extraction). However, for a normal SDF, as it can only represent a closed surface, it remains an open question how to extract a mask solely based on SDF such that it can help reconstruct an open surface that matches one's goal. Specifically, even we compute an unsigned field from an SDF by calculating its absolute value, the mask we obtain from its gradient field remains a closed surface. We are happy to provide more demonstrations if clearer instruction on how to obtain a mask from a normal SDF is provided. \n\n\n> Q4: Wording: I do not find the word \"hybrid\" well chosen to describe the proposed representation. \"Factorised\" or \"Disentangled\" would be more appropriate.\n\nA4: Thank you for your advice! We will consider changing the wording in the revision.\n\n> Q5: The first 3 lines of the introduction only mention SDFs and forget about Occupancy Fields.\n\nA5: We introduced Occupancy Fields in our related works, and we will also add this to our introduction in our revision.", " > Q6: Why is the propose \"sign and distance fusion\" procedure required? Which field is noisy? Both? The \"sign and distance fusion\" is not motivated by any intuition.\n\nA6: Neither of the fields is noisy. We introduce fusion optimization as an **\"registration of the zero level sets of the distance and sign fields\"**. We have shown this ablation study in Figure 9. The naive fusion of sign and distance is simply multiplying them. However, due to the separate learning strategy, the zero level set of the sign field won't be perfectly aligned with that of the distance field. 
This would slightly reduce the surface quality. Hence, we propose this fusion optimization to register these two zero level sets using the first-order information.\n\n> Q7: Tab.2 evaluates oriented normal consistency for methods that are not supervised for it. This is a biased assessment.\n\nA7: We use the oriented normal consistency (NC) to demonstrate that our method is able to reconstruct more consistent and accurate face normals (including their directions) for open shapes, where the other baseline method can't. To alleviate the concern of biased assessment, we provide additional evaluations on the original version of NC in the table below. All the results are reported on the test set with 3000 points as input. Due to the poor mesh surface quality reconstructed from dense point clouds using BPA, our method still outperforms NDF on the original NC metric.\n\n||MGN|Car|Chair|Ship|Lamp|Mean\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|NDF|0.938|0.817|0.865|0.850|0.869|0.868\n|Ours|**0.957**|**0.842**|**0.896**|**0.863**|**0.904**|**0.892**\n\nWe will add this result to the revision.\n\n> Q8: Some references about representing/meshing open surfaces with UDFs are missing, such as \"Neural Dual Contouring\" or \"MeshUDF\". \"DUDE\" is mentioned, but not used as a baseline in the experiment section.\n\nA8: We will add these missing references in the revision. \"DUDE\" hasn't released its source code. Hence, we use NDF (source code available) as our baseline to ensure fair comparison. If requested, we will implement \"DUDE\" in the revision to provide additional comparisons. \n\n> Q9: English quality could be improved.\n\nA9: Thank you for the advice! We will improve our writing in the revision.\n\n## Minor Issues\n\nWe thank the reviewer for the constructive detailed feedback concerning the mathmatical symbols, unclear statements, and detailed implementation of our work. The well-documented code and pre-processed data will also be released to assist the readers to reproduce and understand our algorithm deeply.\n\n> Figures always appear 1 page before their reference in the text.\n\nWe will fix it in the revision.\n\n> In 3.1, how is the surface normal v computed? does it assume the surface is meshed, and uses triangle winding numbers? This is a crucial aspect of the submission that should be made clearer.\n\nThe detailed computing process is introudced in Section 4.1 of the HSDF computation paragraph. We will add more details on normal computation to Section 3.1 in our revision.\n\n> Is Tab. 1 measuring meshing time only? or both network forward pass and meshing time? Is NDF using ball pivoting here?\n\nWe report the whole inference time including both forward pass and meshing time here. For the meshing of NDF, we use the MeshLab scripts released by the NDF's author to apply the BPA. \n\n> Loss term L_s : why clamp a binary sign value? I guess there is a typo here, what is the real loss term?\n\nHere the function **Sign(*)** returns a continuous value. We provide its definition in line 183. But it is indeed a typo -- there should be a two-directional clamp instead of a one-directional clamp. The correct loss term is rewritten below. We will fix this typo in our revision.\n\n$L_s = \\sum\\limits_{x\\in B}\\sum\\limits_{p\\in P}|max(min(Sign_x(p),\\delta),-\\delta) - max(min(SDF(p, S_x),\\delta),-\\delta)|$\n\n> While the introduction focuses on the generative/representation aspect of 3D shapes, the method section starts with an input point cloud. 
Could the shape representation be demonstrated in other settings, like auto-decoder?\n\nYes, HSDF is a general implicit representation that can scale to other application settings, including auto-decoder. Specifically, both the sign and distance fields can be learned in a auto-decoding fashion. We use sparse point cloud as input following the experiment settings of NDF mainly to ensure fair comparisons. We will explore more interesting applications in the future. \n\n> What are the \"adversarial impacts\" of SDF and UDF mentioned in the introduction?\n\nWe mean that their respective shortcomings demonstrated in the first 2 paragraphs of the introduction, which is: SDF can only represent watertight surfaces while UDF suffers from the meshing quality problem when converting the point cloud to mesh.", " We thank the reviewer for the insightful comments and suggestions! We provide our detailed response below.\n## Main Questions\n> Q1: We can use Marching Cubes on UDF and post-process the generated meshes to represent open surfaces.\n\nA1: Since there are no negative values in UDF, we can only use a small positive level value (instead of zero in SDF) to extract meshes using the Marching Cubes algorithm. This typically leads to two layers of surfaces around the target surface (the zero-level surface). To reconstruct a open surface as simple as a 2D rectangle surface patch, simply applying marching cubes would generate a **closed mesh that tightly encloses the surface patch**. It remains extremely challenging to identify which faces do not contain the target surface or generate a mask to convert the closed mesh back to an open one by only using the UDF, not mention more complex shapes that contain both closed and open surfaces.\n\nIn fact, the NDF [13] paper also recognizes the difficulty of using marching cubes to extract open surfaces from UDF, as we quoted from its introduction: *\"Most classical methods, such as marching cubes and volume rendering, find the zero-level set by detecting flips from inside to outside and vice versa, which is **not possible with UDF**.\"* Hence, we believe we are working on a non-trivial problem that is well motivated.\n\n> Q2: The dense point clouds from NDF can be used to estimate consistent normal direction accurately.\n\nA2: First, the UDF 1) suffers from the vanishing gradient problem on the surface (also pointed out by [A]) and 2) relies on a heuristic process to iteratively push the points onto the surface using the gradient field. These limitations make UDF vulnerable to shapes with intricate geometry details: the point-pushing mechanism is easy to get stuck in the local minima (also pointed out by [B]). Hence, UDF may not be able to generate accurate dense point clouds for complex shapes in the first place.\n\nSecond, even with an accurate dense point cloud, it remains an open question on how to estimate consistent and accurate normals for complex shapes with fine-grained details. This is due to the fact that dense point clouds don't include prior knowledge of face orientations in the dataset. This is manifested by our results in Figures 1, 6, and Table 2, where NDF fails to generate surfaces with consistent normal even if a very dense point cloud is used as input. In contrast, our sign field regression enables us to learn the correct face direction pattern from the training data. 
Therefore, our extracted meshes can have more consistent and accurate face normals compared to the NDF.\n\n[A] Deep Implicit Surface Point Prediction Networks, ICCV'21\n\n[B] 3PSDF: Three-Pole Signed Distance Function for Learning Surfaces with Arbitrary Topologies, CVPR'22\n\n> Q3: Why do we need a mask and how does it work on HSDF?\n\nA3: The signed field has been fused with the distance field as described in Section 3.3 to obtain our combined HSDF. As the signed field can estimate the inside/outside sign of query points for generating a zero-level set surface while the distance field can provide accurate point-to-surface distance, the fused HSDF can provide all necessary information for marching cubes to extract iso-surfaces. \n\nIf the marching cubes is naively applied without a mask, it will generate a closed surface. Hence, if we want to generate open surfaces using the marching cubes algorithm, we need to provide a mask to stop the marching cubes procedure from producing meshes in the unwanted regions. This mask is computed from our distance field which enables us to perform marching cubes only in the regions that is close to our target surface. We will improve our statement in the revision.\n\n> Q4: Figure 5, for the NDF method, do you use the dense point cloud generation technique introduced in their paper? If points are dense enough, we will be able to estimate consistent normals, apply marching cubes, and flip the faces as post-processing for BPA.\n\nA4: As discussed in Section 1.4 of the supplemental material, we follow the exact same dense point cloud generation process in NDF and use the MeshLab scripts released by the author to apply the BPA and post-processing (closing holes and re-orienting faces). As mentioned in the response to Q2, due to the limitations of NDF itself and the difficulty of estimating accurate normals from unordered point clouds, the computed normal for BPA is not accurate enough. This leads to bad meshing quality for NDF. Further, as there are too many disconnected triangles and self-intersections in the outcome mesh, post-processing methods, including closing holes and face re-orienting, cannot work well. In contrast, our method can convert HSDF directly into a mesh using the marching cubes without generating a point cloud. Hence, we are able to generate results with much better meshing quality.\n\n", " > Q5: In table 3, what's the meaning of only sampling 300 points. Much more points could be sampled to get a more accurate estimate.\n\nA5: We show the reconstruction evaluation from 300 points sampled mainly because: \n1) We follow the same experiment setting as both NDF and IF-Net to ensure fair comparisons -- they also showed evaluations for sampling 300 points as input.\n2) Evaluations in Table 3 focus on the watertight meshes without any complex inner structure, so the number of sampling points can be small.\n\n> Q6: In Table 2, it would also be great to include a direction-sensitive normal consistency (original version).\n\nA6: We have conducted the evaluations on the original version of normal consistency (NC) as shown below. All the results are reported on the test set with 3000 points as input. 
Due to the poor mesh surface quality reconstructed from dense point clouds using BPA, our method still outperforms NDF on the original NC metric.\n\n||MGN|Car|Chair|Ship|Lamp|Mean\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|NDF|0.938|0.817|0.865|0.850|0.869|0.868\n|Ours|**0.957**|**0.842**|**0.896**|**0.863**|**0.904**|**0.892**\n\nWe will add this evaluation to the revision.\n\n> Q7: It would be better to include more recent UDF approaches in comparison.\n\nA7: There are concurrent papers that work on UDF representation, including MeshUDF [A] and Neural Dual Contouring [B]. We will provide comparisons with these approaches in the revision.\n\n[A] MeshUDF: Fast and Differentiable Meshing of Unsigned Distance Field Networks, arxiv paper.\n\n[B] Neural Dual Contouring, SIGGRAPH'22.\n\n\n> Q8: I am impressed by the numbers in Table 1. However, I have no idea what causes the difference? Could you please give more explanation about the time and memory? It would also be great to include IF-Net in this table as well.\n\nA8: We wish to clarify that the \"Memory\" in Table 1 actually refers to the memory cost of storing the output mesh. We apologize for the potential confusion caused and will clarify it in the revision. We provide the explanation about the time and memory below, followed by additional experiments on timing and memory cost with IF-Net evaluated.\n\n\n1) Time: The ball pivoting algorithm (BPA) is a progressive method which picks a seed triangle and gradually grows the mesh using a rolling ball. Specifically, the ball connects all the vertices that it rolls over and stops when it falls off the boundary of the grown mesh. Such a method can hardly be accelerated in parallel as it would incur conflicts in the shared regions spanned by different seeds. In contrast, the marching cubes algorithm can interpolate triangles within each grid cell independently. Hence, it can be significantly accelerated using parallel computing, making it faster than the BPA approach. \n\n2) Memory: The NDF method requires dense point cloud in order to produce reasonably good results. To convert the dense point cloud to mesh, the BPA algorithm simply connects all the vertices without removing a single point. In contrast, the marching cubes algorithm utilizes regular grids to sample field values and extract the triangles for each grid. Considering the sparsity of the surface occupancy, the vertices of the resulting mesh can be much fewer than the grid points themselves. Hence, our method requires much fewer memory for storing the output mesh compared to NDF.\n\n\nWe also evaluate IF-Net on the same datasets including complex shapes. We report the quantitative results in the table below. IF-Net is faster than ours because it doesn't need to compute the masks for marching cubes. However, **IF-Net can't reconstruct open surfaces as we do**. In addition, IF-Net is prone to generate redundant artifact meshes when dealing with open surface, leading to larger mesh storage cost as shown in the table. 
\n\nInference time (forward pass + meshing):\n||$64^3$|$128^3$|$256^3$|\n|:-:|:-:|:-:|:-:|\n|NDF|89s|58m|780m|||\n|Ours|5s|17s|95s|||\n|IF-Net|<1s|3s|19s||\n\nMesh storage consumption:\n||$64^3$|$128^3$|$256^3$|\n|:-:|:-:|:-:|:-:|\n|NDF|17M|64M|1276M|||\n|Ours|1M|3M|10M|||\n|IF-Net|1.2M|5M|15M||\n\nWe will include these results in the revision.\n\n## Minor issues\n\n> Line 178: \"closest surface normal direction\" is confusing.\n\nWe will modify it to \"normal direction of its closest surface point\".\n\n> Line 183: What does $R_0$ mean?\n\nIt's a math notation for set {$ x|x∈R,x≠0 $}.\n\n> Line 186: Please explicitly explain $S_x$.\n\nWe define S and X at lines 159 to 161. And $S_x$ stands for the target surface of a certain input point cloud X.\n\n> Line 192: For the signed distance, why is the clamping in one direction?\n\nThanks for pointing this out! This is a typo which should be two-directional clamping. We will fix this typo in our revision.", " This paper proposes an implicit neural representation called Hybrid Sign and Distance Field (HSDF) for modeling surfaces with arbitrary topologies. It employs a neural network to regress unsigned and signed distance fields separately. It then fuses the two fields by taking the distance from the unsigned distance field and the sign of the signed distance field. In this way, the unsigned distance field allows modeling of open surfaces, and the signed distance provides the directions of the surfaces. They also simply modified the marching cube algorithm to a masked version to mesh their learned field.\n\nThe overall idea of this paper is simple and interesting. However, some motivations and arguments from the paper are problematic. The presentation quality of the paper could also be further improved. Strengths:\n1. I like the idea of modeling the surface by combing a signed distance field and an unsigned distance field, which enables arbitrary topology and surface direction modeling. \n2. The idea of using two network heads to regress two fields is reasonable, as the combined one is not continuous.\n3. According to Table1, the proposed method seems to be more time and memory efficient.\n\n\n\nWeaknesses:\n1. As stated in Line 30, the main motivation of the proposed method is that applying the marching cubes algorithm to UDF would convert all open surfaces into closed meshes. However, I don't think this point is valid, as we can post-process the generated meshes by Marching Cubes to represent open surfaces. Specifically, we can remove the faces in regions that do not contain the surface.\nAlternatively, we can also apply the masked marching cubes to UDF to only extract regions containing the surfaces.\nMoreover, NDF work has provided a method to output arbitrarily dense point clouds, which can then be used to estimate consistent normal direction accurately.\nAs a result, the point that \"marching cubes algorithm cannot be applied to UDF for open surfaces\" is very vague. I would highly suggest authors sell other benefits of learning a sign function and rewrite the paper in another way.\n\n2. In section 3.4, the role of the signed function is not described. How does HSDF enable marching cubes? Why do we need a mask version of marching cubes? These points are not stated in the text.\n\n3. Figure 5, for the NDF method, do you use the dense point cloud generation technique introduced in their paper? If points are dense enough, we will be able to estimate consistent normals, apply marching cubes, and flip the faces as a post processing for BPA. 
\n\n4. In table 3, what's the meaning of only sampling 300 points. Much more points could be sampled to get a more accurate estimate.\n\n5. In Table 2, it would also be great to include a direction-sensitive normal consistency (original version).\n\n6. It would be better to include more recent UDF approaches in comparison.\n\n7. The presentation could be improved a lot:\n\n(a) Line 178: \"closest surface normal direction\" is confusing.\n\n(b) Line 183: What does $\\mathbb{R}_{0}$ mean?\n\n(c) Line 184: $\\mathbb{R}^{3} \\mapsto[-1,1]$ should be {-1, 1} instead of an interval.\n\n(d) Line 186: Please explicitly explain $\\mathcal{S}_{\\mathrm{x}}$.\n\n(e) Line 192: For the signed distance, why is the clamping in one direction?\n\n(f) Line 238: watertight meshes -> watertight meshes and open surfaces? I am impressed by the numbers in Table 1. However, I have no idea what causes the difference? Could you please give more explanation about the time and memory? It would also be great to include IF-Net in this table as well.\n\n\n\n The authors have mentioned some limitations in the text.", " As opposed to SDFs, this submission proposes to regress sign and distance separately, and extend sign to oriented open (ie. non watertight) meshes. Authors demonstrate it can represent both open and close surfaces, and propose a way to mesh it and merge sign and distance fields that reduces inaccuracies of both components.\n Strengths:\n- In the experiment section, metrics include both a surface proximity (CD) and smoothness (NC) components, and are in favor of the proposed approach agains NDF.\n- Results look visually appealing.\n- Ablations demonstrate the effectiveness of the sign+distance fusion.\n\n\nWeaknesses:\n- The main limitation is that learning a prior on triangle facets orientation requires datasets to be properly oriented, which might not always be possible. This huge limitation is only slightly mentioned in Fig. 7 but should be discussed more thoroughly and sooner.\n- The current formulation reads as if factoring SDFs into distance times sign was making the field easier to be learned by MLPs because it is more continuous. But in fact the sign network is highly non continuous.\n- The intro states that \"...\" , but this is not true, see NDF (Chibane 2020) for example. The proposed \"Masked Marching Cubes\" could be applied to normal SDFs, and should also be demonstrated in this case.\n- Wording: I do not find the word \"hybrid\" well chosen to describe the proposed representation. \"Factorised\" or \"Disentangled\" would be more appropriate.\n- The first 3 lines of the introduction only mention SDFs and forget about Occupancy Fields.\n- The \"sign and distance fusion\" is not motivated by any intuition.\n- Tab.2 evaluates oriented normal consistency for methods that are not supervised for it. This is a biased assessment.\n- Some references about representing/meshing open surfaces with UDFs are missing, such as \"Neural Dual Contouring\" or \"MeshUDF\". \"DUDE\" is mentioned, but not used as a baseline in the experiment section.\n- English quality could be improved.\n\nMinor: figures always appear 1 page before their reference in the text. - In 3.1, how is the surface normal v computed? does it assume the surface is meshed, and uses triangle winding numbers? This is a crucial aspect of the submission that should be made clearer.\n- Why is the propose \"sign and distance fusion\" procedure required? Which field is noisy? Both?\n- Is Tab. 1 measuring meshing time only? 
or both network forward pass and meshing time? Is NDF using ball pivoting here?\n- Loss term L_s : why clamp a binary sign value? I guess there is a typo here, what is the real loss term?\n- While the introduction focuses on the generative/representation aspect of 3D shapes, the method section starts with an input point cloud. Could the shape representation be demonstrated in other settings, like auto-decoder?\n- What are the \"adversarial impacts\" of SDF and UDF mentioned in the introduction? Authors have identified mesh boundaries and consistent normal orientations as main limitations, but do not quantify it. For example:\n- would a metric computed only on mesh borders really show that boundaries are not well represented?\n- across the main 3D datasets, which ones have consistent normals? What are pathological cases (Moebius strip)?", " This paper proposes a hybrid distance field approach to represent 3D object surfaces, called HSDF. The main idea behind it is to utilize the advantage of the unsigned distance function and use another network head to predict the sign of the distance field for each point. The network structure is inherited from IF-net with additional heads. To extract the mesh from HSDF, the authors propose a masked Marching Cube algorithm to extract the iso-surface from a union of the local bounding boxes to handle the arbitrary topology of the 3D shape. The experiments conducted in ShapeNet demonstrate the HSDF achieves good performance compared with other methods. 1. The design of HSDF is simple but effective. The experimental result and ablation study also demonstrate its effectiveness.\n2. The motivation of the proposed approach is reasonable and the idea is easy to follow.\n3. The gradient optimization idea in Sec 3.3 is thought-provoking. 1. Several recent works with similar ideas exist, such as [A]. It would be better if the authors could discuss and compare the differences.\n\n2. Discussion about robustness. The input of all comparison methods is a point cloud. Suppose the point cloud contains some unexpected noise, what about the performance drop or influenced by the perturbation of inputs? \n\n3. The notation in this paper might need further improvement.\n\n a. $Sign$ in line 192 is a signed distance function, I think it should be $\\text{HSDF}(\\mathbf{p}, \\mathbf{X}) = \\text{sign}(Sign_\\mathbf{x}(\\mathbf{p}))*Dis_\\mathbf{x}(\\mathbf{p})$ in line 213.\n\n b. line 184 and line 186, the mapping should be a binary output {-1, 1} rather than an interval $[-1, 1]$?\n\n\n\n[A] 3PSDF: Three-Pole Signed Distance Function for Learning Surfaces with Arbitrary Topologies (CVPR'22)\n\n The authors have discussed the limitation adequately. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "ixSAzRzzrJ", "8dGgN01Hwbl", "MGOogJlOvdH", "8dGgN01Hwbl", "QPhvP7fFvkG0", "hc0av-EJdtYz", "nips_2022_Soadfc-JMeX", "qOETV4zKql", "UiBkByyU-hq", "UiBkByyU-hq", "eaHAgmV46_-", "eaHAgmV46_-", "nips_2022_Soadfc-JMeX", "nips_2022_Soadfc-JMeX", "nips_2022_Soadfc-JMeX" ]
nips_2022_cRNl08YWRKq
Obj2Seq: Formatting Objects as Sequences with Class Prompt for Visual Tasks
Visual tasks vary a lot in their output formats and concerned contents, therefore it is hard to process them with an identical structure. One main obstacle lies in the high-dimensional outputs in object-level visual tasks. In this paper, we propose an object-centric vision framework, Obj2Seq. Obj2Seq takes objects as basic units, and regards most object-level visual tasks as sequence generation problems of objects. Therefore, these visual tasks can be decoupled into two steps. First recognize objects of given categories, and then generate a sequence for each of these objects. The definition of the output sequences varies for different tasks, and the model is supervised by matching these sequences with ground-truth targets. Obj2Seq is able to flexibly determine input categories to satisfy customized requirements, and be easily extended to different visual tasks. When experimenting on MS COCO, Obj2Seq achieves 45.7% AP on object detection, 89.0% AP on multi-label classification and 65.0% AP on human pose estimation. These results demonstrate its potential to be generally applied to different visual tasks. Code has been made available at: https://github.com/CASIA-IVA-Lab/Obj2Seq.
Accept
The paper proposes an approach for formulating a few visual tasks as sequence prediction with class prompts. Reviewers are overall positive about the paper, especially the direction towards a unified vision model that the paper explores. However, it is also pointed out that the paper should be more explicit about how the sequence is modeled with object queries and a bipartite graph matching loss, which are significant differences from standard sequence modeling as presented in language models or Pix2Seq v1/v2. The authors should consider pointing out these differences in the abstract and Figures 1/2, to avoid misleading readers into thinking this is just like language modeling with an autoregressive loss. Overall, I’d recommend accepting the paper given that it is a good attempt towards a unified vision model, but I also encourage the authors to further improve the writing and clarify the sequence modeling part as mentioned above.
train
[ "zav4cScyGk1", "I4C9ueIQslD", "84hIRL-8_vg", "qVcwiyGtfcE", "3bjv2friH7a", "PIa0b71KPPU", "rgTLKCf2Uzb", "1ANpEBB26PA", "gV74kBO8xhH", "KWuYU3in46v", "ylHzrPXYu-V", "iwVa7TnHUNZ", "LkjsjbQ-RWz", "hRfuoR9g1p", "I2YT3QKJXSp", "2nBLDGYJB2O" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewers for their careful thoughts and kindly comments. We have updated the manuscript to emphasize more on how Obj2Seq is able to solve object-level visual tasks in a unified way, and therefore make our description more accurate and easier to understand. We will keep working to generalize Obj2Seq to more tasks, and construct a more comprehensive vision framework.", " We thank the reviewers for their careful thoughts and kindly comments.\n\nIf we take a fixed threshold to retain categories, the number of categories above this threshold vary for different input images. Some images may have tens of categories retained, while some have few. This is because the scores of class existence are calculated based on each input image, and independent for each category. They do not have a fixed distribution. Similarly, when we take the top-$K$ policy, we retain a fixed number of $K$ categories, even if some of them have a relative low score.\n\nMeanwhile, we have modified the description in our manuscript, and emphasize that Obj2Seq is a unified framework for object-level visual tasks. We will keep working to generalize this framework towards a wider range of other visual tasks, and construct a more unified framework in the future.", " We thank the reviewers for their careful thoughts and kindly comments.\n\nA 2-token transformer requires delicate design for different tasks (e.g., 4-d MLP layers for detection and 34-d MLP layers for keypoint). While in Obj2Seq, we utilize a unified sequence head without task-specific parameters. On one hand, task embeddings all share an identical format. The model behavior is able to keep unchanged for different task embeddings. On the other hand, the task embeddings can be further combined with pre-trained text feature in NLP to eliminate finetuning for new tasks in the future.\n\nHere we also implement the 2-token transformer head on human detection and keypoint. With the help of task tokens, 2-token head achieves higher performance than the simple MLP baseline. However, since Obj2Seq takes definite outputs from previous steps and utilizes them as inputs for subsequent steps, it is able to capture more explicit intra- and inter-task relations. Therefore, Obj2Seq achieves even better results. Moreover, this unified sequence format is consistent with text and audio tasks. It is more friendly to be extended for other multi-model applications. We have updated these results in the supplementary material.\n\n| Experiment | Epochs | $AP_{det}$ | $AP_{det}^{50}$ | $AP_{det}^{75}$ | $AP_{kps}$ | $AP_{kps}^{50}$ | $AP_{kps}^{75}$ |\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| Baseline | 50 | 53.7 | 78.6 | 58.9 | 57.2 | 83.3 | 63.7 |\n| 2-Token | 50 | 53.9 | 80.2 | 58.4 | 58.3 | 83.7 | 64.9 |\n| **Obj2Seq**| 50 | **54.4** | **80.3** | **59.4** | **60.5** | **83.9** | **67.3** |", " Thanks for the response, it clarifies most of my concerns.\n\nA minor point about Figure 3, I still think Figures 3(a) and 3(b) can share the same x-axis, if the metric for selecting top-K is the same metric for thresholding, which I believe it is. I don't understand why \"if one is fixed, the other changes with input images\", can I bother the authors to explain a bit more?\n\nOverall, I slightly raise my score from \"borderline reject\" to \"borderline accept\". The main reason is similar to reviewer `tovC`'s point, that the paper could bring some discussions around the object-centred architectures and open-world vocabulary tasks. 
However, I suggest claiming the \"unified framework\" carefully or adding some constraints like \"object-centric\".", " Since this paper is not in my area, my previous score is under-rated. After reading all the reviews, the previous related work like DETR, and author feedback, I decided to raise my score to 5: Borderline accept. However, as also mentioned by Reviewer 7jWi and Reviewer LxeJ, I still think the so-called \"unified framework\" is a little bit exaggerated. I hope the authors can add some constraints like \"for object-level visual tasks\" in the later revision. ", " I would like to thank the authors for the detailed response. The response addressed part of my concerns, but I still have some questions about the sequence generation framework:\n\n- I am still not convinced that the sequence interface can better generalize to other tasks. Since the sequence interface still needs extra task embeddings to indicate desired tasks, I think the 2-token transformers can also be extended to other tasks with additional task embeddings.\n\n- I still think the 2-token transformer is a very important baseline for the proposed sequence interface. It would be better to provide to the results of the 2-token transformer, which can be a useful reference for future research in this direction. ", " We appreciate the reviewers' kindly and thoughtful comments, these inspire us a lot. We would continue digging into these topics, and try our best to make contributions in the field of a unified vision interface.", " Thanks for the response. I raised the score.\n\nI like the discussion in the author response. I tend to prefer the paper to be accepted so that broader readers can discuss more on the topics of:\n1. Open-world tasks utilizing multimodal pretraining;\n2. Would object query architecture be necessary to get better performance for some object-level tasks? The discussion would be interesting regarding some recent related works as pix2seq v1/v2.", " We thank the reviewers for detailed comments. Here we elaborate on the Obj2Seq and its difference from Mask R-CNN.\n\nObj2Seq differs from Mask R-CNN in two aspects. On one hand, Obj2Seq provides a unified output format as sequences for object-level tasks, rather than only a multitask co-training framework. An identical prediction head in Obj2Seq is capable to perform different tasks, without introducing task-specific parameters. While in Mask R-CNN, we need several heads applying to RoIs, each for a certain task. On the other hand, Obj2Seq is able to be assigned with certain categories to obtain desired outputs. After pre-training with large datasets, we can adjust the model behavior during inference according to practical requirements without tuning, which is unreachable for Mask R-CNN. Last but not least, the sequence output and class prompts give us the opportunity and potential to form a unified network structure with NLP, speech, and even multimodality.\n\nHowever, due to the diverse output formats in computer vision, it is hard to unify image-level, object-level and pixel-level tasks in the same framework. Existing methods like [1,2] still need multiple heads for different vision and NLP tasks, and lack a unified output format. They bear more similarity with a multitask framework. In this paper, we mainly concentrate on object-level tasks, and format them as sequence prediction tasks. We will also generalize towards other visual tasks like pixel-level segmentation in the further research.", " We thank the reviewers for detailed comments. 
Here we elaborate on why the prompt indicator and object queries will not block further extensions.\n\n**The capability for new class discovery and open-world challenges**\n\nInstead of blocking the way to open-vocabulary applications, class prompts can help large-dataset training and real-world applications with existing text embeddings (e.g., BERT, CLIP). These embedding models are trained with large-scale text or image-text data. They can not only cover new categories, but also improve model performance with embedded semantic knowledge. In Table 4, we demonstrate CLIP embeddings can be directly taken as prompts to represent input categories. Therefore, it is highly flexible to modify the input prompts in practical utilities, and scale the label space and training data to obtain a large general model. Moreover, the modification made in some class prompts would not influence the results of other categories, since all prompts are processed independently in Obj2Seq.\n\n**Object queries provides a structured and unified solution for object-level tasks**\n\nA unified sequence prediction task is a promising direction to unify vision and NLP tasks, while high-dimensional outputs of sophisticated tasks can hardly be represented by a single sequence. Pix2Seq succeeds with object detection [1], but fails to generalize over keypoint detection and instance segmentation in [2]. It can only perform these tasks on object-centric crops. In object-related tasks, objects are appropriate units to organize features and output information. Therefore, we borrow the concept of object queries to better adapt Obj2Seq for object-level visual tasks, and formulate these tasks as sequence generation tasks for each object. On one hand, Obj2Seq provides a more structured output to better enhance object-level task performance. On the other hand, these tasks take a unified format so that they can be handled by an identical framework in an end-to-end manner. We will keep working to generalize Obj2Seq over other image-level and pix-level tasks, to reach the goal of a unified interface for all visual tasks in the future.\n\n**Some practical problems about Pix2Seq**\n\nPix2Seq [1,2] points out a promising direction to formulate visual tasks as sequence generation problems like NLP. However, it meets two problems. Firstly, Pix2Seq is not able to take additional knowledge as prompts, such as existing language priors pretrained on large-scale text data. Since Pix2Seq always generates a fixed object set, we are not able to use external knowledge to guide the detection of changeable categories. Therefore, it has limited ability in open-vocabulary detection. On the contrary, Obj2Seq can take CLIP embeddings as prompts, and output objects within an arbitrary category set. Secondly, introducing more tasks like keypoint detection and instance segmentation would result in even longer sequences and higher computation cost. In order to perform these tasks, Pix2Seq v2 crops each objects out in a compromised way. By contrast, Obj2Seq generates sequences for all objects in parallel, and performs these tasks in an end-to-end way.\n\n[1] Pix2seq: A language modeling framework for object detection, ICLR 2022.\n\n[2] A Unified Sequence Interface for Vision Tasks, arXiv 2206.07669.", " We thank the reviewers for detailed comments. Here we address them separately:\n\n**How Obj2Seq generalizes over object-level tasks**\n\nWe intend to build a unified framework for object-level visual tasks. 
Current methods like OFA mainly conduct experiments on tasks only generally describing an image (e.g., classification and text-image matching). Even though they utilize detection datasets for pretraining, OFA does not provide evaluation metrics on object detection. Therefore, it is still far from unifying visual tasks with various output formats. Among visual tasks, object-level outputs usually contain sophisticated and high-dimensional information which can hardly fit into a single sequence. Obj2Seq concentrates on these tasks. We format these tasks as sequence generation problems for each single object. This works as a unified interface for object-level tasks. We take object detection and human pose estimation as examples to demonstrate its efficacy. When new tasks are encountered, Obj2Seq adapts by modifying the definition of the output sequences only. We will keep supporting more tasks with Obj2Seq, such as pixel-level segmentation, and release our code for public.\n\n**Difference from DETR-family and visual grounding tasks**\n\nObj2Seq works as a unified interface for various object-level visual tasks, and borrows the idea of object query from DETR-family and M-DETR to extract better object-related features. However, by predicting sequences as outputs, Obj2Seq is able to generate outputs for various tasks, more than just object detection. As to visual grounding, our problem set is different in two aspects. On one hand, visual grounding takes a detailed description as the input, and identifies only one specific object in the input image for each description, while Obj2Seq detects all objects belonging to the input categories. On the other hand, visual grounding only outputs the bounding box of the referred object, while Obj2Seq is able to generate any required description in an output sequence.\n\nWhen evaluating on COCO, Obj2Seq takes a fixed set of 80 categories as input prompts for fair comparison with other detection frameworks. However, Prompted Visual Indicator leaves us an interface to interfere the model inference with changing category group, thus satisfying varied real-world requirements. This idea is somewhat inspired by visual grounding tasks, and we will further generalize this visual indicator with more detailed text data.\n\n**Fairness about the retention policy**\n\nThe retention policy has minor impact on the model performance. As the blue line in Figure 3(a), there is minor difference between the accuracies with the best policy (45.7, $K'=20$) and without a retention policy (45.5, $K'=80$, i.e., remain all categories). Both of them perform better than previous works in Table 1.\n\nObj2Seq achieves improvements in accuracy from two aspects. The first is that class prompts help object queries focus on corresponding categories. The second is that a sequence structure can better perceive relations among different output steps. These have been validated in Table 3. \n\n**Implementation details about object queries**\n\nObj2Seq generates object queries based on the input image and categories. It first filters and maintains several top categories that mostly likely exist (assuming $K'$ categories), and then generates $N$ object queries for each retained category ($K'N$ queries in total). In our experiments, we set $N=100$. For more detailed configurations, please refer Appendix A.1.\n\n**Clarification of variables in Figure 3**\n\nThese two variables, the number of remained categories and the threshold to remain, do not have a fixed mapping relationship. 
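A toy sketch with hypothetical class-existence scores makes this concrete (the scores and class counts below are made up purely for illustration):

```python
import torch

def retain_categories(scores, k=None, thresh=None):
    """scores: (num_classes,) class-existence scores for ONE image.
    Exactly one of k / thresh is used; the two policies are not equivalent."""
    if k is not None:                                   # top-K: fixed count per image
        return torch.topk(scores, k).indices
    return (scores > thresh).nonzero(as_tuple=True)[0]  # threshold: image-dependent count

img_a = torch.tensor([0.9, 0.7, 0.2, 0.1])   # hypothetical scores, image A
img_b = torch.tensor([0.9, 0.1, 0.1, 0.1])   # hypothetical scores, image B
print(len(retain_categories(img_a, thresh=0.5)),   # 2 categories kept
      len(retain_categories(img_b, thresh=0.5)))   # 1 category kept
print(len(retain_categories(img_a, k=2)),          # always 2
      len(retain_categories(img_b, k=2)))          # always 2
```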
If one is fixed, the other changes with the input images. In Figures 3(a) and 3(b), we demonstrate that Obj2Seq achieves steady performance no matter whether we control the retention policy by number or by threshold.\n\n**The capability for new class discovery and open-world challenges**\n\nIn Table 4, we validate that Obj2Seq can also directly use CLIP embeddings as input prompts, which makes it capable of handling large-dataset and real-world open-vocabulary challenges. The prompt indicator is designed to assign specific categories to this general model according to practical requirements. The input prompts can be changed flexibly, and Obj2Seq then generates the corresponding output sequences. We will also extend the indicator to support more varied and detailed prompts and combine it with text data.\n\nMore specifically, though the experiments on COCO take a fixed set of 80 categories, each category is processed independently in Obj2Seq. We can flexibly modify some prompts without influencing others.", " We thank the reviewers for detailed comments. Here we address them separately:\n\n**The design of object queries, and how Obj2Seq generalizes over object-level tasks**\n\nObj2Seq intends to construct a unified framework for object-level tasks. These tasks usually require highly structured outputs. Therefore, we adopt object queries and the bipartite matching loss in order to extract better object-related features and achieve SOTA results. These designs are inspired by DETR. However, Obj2Seq unifies the output format of each object as a sequence, which is ready to generalize to different object-level tasks. Meanwhile, Obj2Seq performs all object-level tasks in an end-to-end way, which is simpler and friendlier than the previous unified framework, Pix2Seq v2. The latter needs to crop each object out first to conduct sophisticated tasks in a compromised way. Currently, Obj2Seq is mainly specialized in object-level tasks. We will keep working to generalize it towards other tasks, such as pixel-level segmentation.\n\n**Necessity of the sequence generation framework under the two-step design**\n\nThe sequence prediction task is a promising direction to unify vision and NLP tasks. Currently, the main obstacle lies in the difficulty of formatting diverse visual task outputs in an identical sequence. Instead, Obj2Seq adapts the original auto-regression framework for object-level tasks. It formats each object as a sequence, which provides a more structured and unified interface. On one hand, a sequence is a general format to describe an object and can be adapted for different tasks. On the other hand, organizing the outputs by objects provides more explicit supervision, leading to better performance. The experiments on object detection and human pose estimation validate that Obj2Seq is capable of generalizing over object-level tasks and achieves SOTA results. The ablation study in Table 3 further indicates that the sequence format can even lead to better performance. This object-sequence interface is ready to be extended to other object-level tasks, and we are going to further adapt it for other image-level and pixel-level tasks.\n\nAs to the option of a 2-token transformer, it is much simpler for detection and keypoint only. 
However, adding new tasks requires introducing more query tokens and new MLPs, which makes it less friendly to general extensions.\n\n**The potential to perform multiple tasks with a unified framework**\n\nThe inference pipeline of Obj2Seq keeps unchanged for different object-level tasks, while Pix2Seq needs to crop objects out for tasks like keypoint detection. Our identical and end-to-end pipeline makes Obj2Seq more friendly for new task extension and multi-task training. When performing different tasks, we only need to change required prompt inputs and interpret output sequences according to specific tasks. For example, we can train people detection and keypoint together, and achieves 57.3 mAP and 65.0 mAP.\n\nCurrently, we are trying to combine general object detection and person keypoint. By simply combining a batch for each task together, we obtain 44.7 mAP on detection and 54.2 mAP on keypoint. Since all objects have bounding box, but only people have keypoint annotations, and their data amounts differ a lot, the training configuration requires delicate tuning to balance different tasks. We believe after tuning a single unified model is able to achieve comparable performance with single-task models.", " This paper presents a new object-centered framework to unify visual tasks including object detection, key point detection, and multi-label classification. Different from the previous Pix2Seq method, this paper divides the sequence generation problem into an object query generation sub-problem and an attribute sequence generation sub-problem. The method achieves good results on multiple visual recognition tasks on COCO. Strengths:\n\n- The idea of decoupling the sequence generation process into two steps is new and insightful. \n\n- The object query generation process allows the use of bipartite matching loss for supervision, which improves the optimization of the original pix2seq and helps the framework achieve competitive results with state-of-the-art methods.\n\n- The method is evaluated on multiple popular benchmarks and achieves good results. Ablation studies in Section 4.4 make the paper solid.\n\nWeaknesses:\n\n- Since the method uses bipartite matching loss from DETR [4] for supervision and generates many redundant queries (100 queries for each candidate category according to Appendix A) like [4], I think the overall framework can be viewed as an improved DETR with class conditioned query generator, which is close to several previous methods like [28, 43, 26, 49]. The object-centered design also makes the method less general compared to the concurrent work pix2seq v2 [r1] which can be extended to the image captioning task.\n\n- The two-step design makes the necessity of the sequence generation framework questionable. The sequence generation task in Pix2Seq is design to unify vision and NLP tasks. However, due to the existence of the Prompted Visual Indicator and bipartite matching loss, the proposed method is totally different from sequence-to-sequence autoregression framework in NLP. Therefore, is that possible to further simplify the General Sequence Predictor as a 2-token transformers model where the object query is combined with detection and keypoint task tokens as the inputs and use MLP heads like DETR to obtain the final prediction? \n\n- The paper emphasizes a unified framework is designed for various visual tasks. However, the experiments didn't show the potential of the proposed framework to perform multiple challenging tasks like [r1]. 
Is it possible for the proposed framework to perform general object detection and keypoint detection with a single model? Will the model benefit from joint training?\n\n[r1] A Unified Sequence Interface for Vision Tasks, arXiv 2206.07669 Overall, I think this is a good paper with several new and insightful technical contributions and decent results. Although I still have concerns about some designs, I think the proposed object-centered framework might be a new and interesting direction for several visual recognition tasks. Therefore, I lean toward accepting this paper. This paper can be stronger if the questions mentioned in the Weaknesses subsection can be addressed. The limitations and potential societal impact of the method have been discussed. ", " The paper introduces a unified architecture similar to M-DETR, but takes object class labels as input, and outputs three components in the same architecture: \n1. the existence of object for multi-label classification \n2. the box coordinates for object detection \n3. keypoint coordinates for human pose estimation. \n\nThe key component of this unified architecture is the sequence decoder, from which the first four outputs are always trained to be box coordinates and the rest outputs are trained to be human pose keypoints. \n The novelty of the paper is limited. The paper claims a \"unified framework\" (e.g. Figure 1) and \"a wide range of vision tasks\" (L91) but the experiments only show the tasks of outputting spatial coordinates (box coordinate and keypoint). While some recent work OFA (I’m aware it was not officially published at the submission time) shows a seq2seq framework can be applied to a much wider range of tasks including detection, caption and VQA. Overall I mean more vision tasks are required to support the claim of a “unified framework”. Second, the authors design an architecture that is similar to DETR-family, and the previous work M-DETR uses the architecture for language grounding tasks. This paper takes the class labels as input and the model outputs the bounding box, which feels more like a simplified version of a visual grounding task rather than object detection. Can the authors elaborate more on the difference between the proposed setting and visual grounding tasks?\n\nThe overall quality of the paper is good. The experiments are clearly explained and discussed. However, I think the comparison with previous works (Table 1) is unfair since the method uses a retention policy to pre-filter some probable object categories while the normal detection method does not benefit from this. Can the authors discuss this comparison more?\n\nIn general, the paper is very clearly written and well organized. The figures are clear and easy to understand. \n\nThe experiments in object detection show the proposed method outperforms previous methods but due to the retention policy (mentioned above), I doubt the fairness of this comparison and more discussion is required. The experiments in human pose estimation show the proposed method is effective when compared with previous works. \n 1. Are the Figure (3) a & b actually the same figure? I.e. the x-axes have one-to-one mapping and can be plotted in the same figure. \n2. What if there are multiple objects of the same category in the image? From Figure 2 it looks like there are 3 object queries l for each class embedding output. Does it work if the image has more than 3 people? Could the authors add more details about how to handle these multi-instance cases? 
\n The paper discusses three limitations: (1) not exploring the design of the feature extractor, (2) not exploring other tasks, (3) only taking object class as input and not exploring more detailed indicators. I agree with these limitations as potential future works, but I think point (2) has a higher priority given the paper claims a \"unified framework\". \n\nAdditionally, I think one limitation of the model is that the object categories are limited by the input categories, and the model cannot discover new categories. \n", " This paper is a nice try towards unified interface for various object related computer vision tasks. It use image and class as prompt, achieving SOTA detection and pose estimation performance in the same interface. The architecture designs of the Prompted Visual Indicator and Sequence Predictor are sound. The major debates lie in the usage of class as prompt and the object-level query. These two might be beneficial to get SOTA object-level tasks, but might also be the blocker for more general vision task interface. Strengths:\n1. This effort addresses an ambitious goal of getting a unified interface for computer vision tasks. Obj2seq unifies object detection, pose estimation and image classification.\n2. The designed architecture achieves SOTA performance on object detection and pose estimation.\n\nWeaknesses:\n1. Class as prompt blocks its way to generalize to open-vocabulary applications. The goal of the unified interface is to utilize the scaling law of large scale model to utilize large dataset and address real-world open-vocabulary challenges.\nDebatable:\n1. The architecture keeps the object query, while pix2seq has already gotten rid of that. This is debatable. I don't have an obvious answer whether that part is necessary to achieve SOTA performance on several object-level tasks considering object is the unit that interaction or reasoning works on. The limitation of Pix2seq mentioned in this paper that \"Pix2Seq is not aware of the desired targets before inference, and the output sequence might become extremely long under sophisticated scenes.\" This is not convincing enough. We could argue this actually shows the flexibility of Pix2seq. May I bother the author to elaborate more on this? What are the specific scenarios Pix2Seq may meet trouble and Obj2Seq is better. Yes, the author mentioned the limitation of this work.\nClass as prompt is not only a limitation to satisfy finer requirements, but also cause trouble on open-vocabulary applications.", " This paper proposes a new framework for vision tasks, called Obj2Seq. It takes prompts of classes as input then filters the non-exist categories and keeps the most confident K' categories for each image by the Prompted Visual Indicator module. After that, they will be passed to the Object Transformer Decoder module together with the encoded image features to get the object queries. The final General Sequence Predictor takes these queries as input to generate a sequence of desired results, e.g., the location (x, y), size (w, h), or even the other for fine-grained information like keypoints (x_nose, y_nose, x_eye1, y_eye1, .....). 
The experiments on MS COCO demonstrate that the proposed Obj2Seq can achieve comparable results with other task-specific methods.\n [+] This paper proposes a new framework for object-level visual prediction tasks, e.g., object detection, multi-label classification or human pose estimation.\n[+] According to the experimental results, the proposed Obj2Seq can achieve comparable performances on each task with their task-specific methods.\n\n[-] However, I don't think Obj2Seq is a unified vision framework for most vision tasks, which is mainly designed to solve the object-level prediction. In my opinion, the capacity of the proposed Obj2Seq is equal to the conventional Mask R-CNN framework, because Mask R-CNN is also able to equip new prediction heads based on the downstream tasks. Therefore, I prefer to consider Obj2Seq as another version of \"Mask R-CNN\" rather than a unified framework for vision tasks. To better understand what a unified vision framework is supposed to look like, please refer to the [1, 2], which are capable of both localization tasks and understanding tasks. Note that I'm NOT suggesting the authors to compare the proposed Obj2Seq with these methods, as [2] and this paper are contemporary works. What I want to say is that the proposed Obj2Seq is far from a unified vision framework. \n\n\n[1] Lu, Jiasen, et al. \"12-in-1: Multi-task vision and language representation learning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\n[2] Zhang, Haotian, et al. \"GLIPv2: Unifying Localization and Vision-Language Understanding.\" arXiv preprint arXiv:2206.05836 (2022). According to the authors, the main contribution of this paper is to provide a unified visual framework, but I only perceive it as an object-level prediction framework, which is equivalent to the conventional Mask R-CNN framework in terms of potential capacity. Therefore, I want to know some specific examples of tasks that can only be formulated by Obj2Seq but not Mask R-CNN during rebuttal. The authors have already addressed the limitations." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 1 ]
[ "3bjv2friH7a", "qVcwiyGtfcE", "PIa0b71KPPU", "ylHzrPXYu-V", "gV74kBO8xhH", "iwVa7TnHUNZ", "1ANpEBB26PA", "KWuYU3in46v", "2nBLDGYJB2O", "I2YT3QKJXSp", "hRfuoR9g1p", "LkjsjbQ-RWz", "nips_2022_cRNl08YWRKq", "nips_2022_cRNl08YWRKq", "nips_2022_cRNl08YWRKq", "nips_2022_cRNl08YWRKq" ]
nips_2022_1ryTomA0iKa
Riemannian Neural SDE: Learning Stochastic Representations on Manifolds
In recent years, the neural stochastic differential equation (NSDE) has gained attention for modeling stochastic representations with great success in various types of applications. However, it typically loses expressivity when the data representation is manifold-valued. To address this issue, we suggest a principled method for expressing the stochastic representation with the Riemannian neural SDE (RNSDE), which extends the conventional Euclidean NSDE. Empirical results for various tasks demonstrate that the proposed method significantly outperforms baseline methods.
Accept
There was a consensus towards weak acceptance among all the reviewers, and I agree with this consensus. This paper solves an important problem of applying SDEs to manifolds. It is clearly written, and all the reviewers agree that the claims are well-supported by strong experimental results. On the other hand, this clarity of writing perhaps relies overmuch on familiarity with the area, and some effort should be made to smooth out the presentation for a general NeurIPS audience. Beyond this, the weaknesses pointed out by the reviewers were well addressed by the author response, including an additional experimental result that provides a comparison to Moser Flow: there is no strong unaddressed weakness that would merit rejection. I think that the manifold learning community, as well as the Neural ODE community more broadly, at NeurIPS will find this work interesting and useful, as it expands the range of methods they can apply on manifolds. As such, I lean towards accepting this paper.
train
[ "Xmgfi7voG3H", "br3zpxZq_d", "HK1_M-bqGWw", "qI_A1GZmXJa", "20RnQXosm8P", "7cpUOuIXL8M", "8Ivqt_XIt7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are glad to inform that every reviewers appreciated the contributions and acknowledged the strength of our paper. In this section, we compare our model with prior work [D] and give a brief explanation and comparison. The contents in this comment will be added in an additional content page for the camera-ready version. \n\n$\\textbf{[Comparison to the prior work]}$\n\nIn [D], they suggested the time-reversed diffusion process and the corresponding score-matching framework for generative modeling. In contrast to our approach, they proposed a geodesic random walk to propagate the stochastic dynamics which naturally ensures the intrinsic geometric structure. Then, the denoising score-matching was applied by eigen-decomposition of the transition probability as a heat kernel with respect to the Fokker-Planck equation in Appendix (10). In generative modeling, our method follows a different methodology while our interest lies in matching the endpoint of the probabilistic state by setting a static bridge via log-Sinkhorn.\n\n------\n\n[D] Riemannian Score-Based Generative Modeling, 2022, preprint.", " Thank you for pointing out the strengths of our paper. Moreover, we thank the reviewer for the detailed review and insightful comments. Below, we addressed the comments raised by the reviewer.\n\n$\\textbf{[Computational Complexity, Scalability]}$\n\nDue to the computational complexity possibly raised by geometric objects, we restricted our interests to the case where two geometric objects are analytically tractable. Especially, we require two geometric objects including $\\textbf{Riemannian metric tensor}$ $\\{ g_{ij} \\}$, and the $\\textbf{generalized distance function}$ $c(x,y)$ (called a cost function in optimal transport literature). The former is used to define the proposed dynamical system on manifolds, and the latter is for the dual optimal transport problem. Thankfully, the additional computational costs are insignificant during the training and inference time as we priorly compute these objects using the mathematical definition introduced in Appendix (59) and (60). To address the scalability issue, we set an additional experiment by increasing the dimensionality for the product copies of manifolds, $\\prod_{k}^K (\\mathbb{S}^2)^{k=1} = \\mathbb{S}^2 \\times \\cdots \\times \\mathbb{S}^2$. The following table shows the average training time per epoch given the different dimensionality $K \\in \\{1, 4, 8, 16\\}$.\n\n|K=1|K=4|K=8|K=16|\n|--:|---|---|---|\n|0.82|1.16|1.83|2.07|\n\nAs shown in the table, the additional computational burden is less than linear growth. We checked that the number of temporal states for discrete Euler-Maruyama Scheme is the primal factor that causes high computational costs as similar to conventional neural dynamical models.\n\n$\\textbf{[Geometric Constraint by Projection]}$\n\nIf one projects the representation of Euclidean neural SDE in (1) onto the sphere by using length scaling $X'_t = P(X_t) \\coloneqq X_t / || X_t ||_E$, the diffusion part in the second term is suppressed. Eventually, the projected representation $X'$ may lose the diffusive property if the method continues the projection during the stochastic propagation (Euler-Maruyama scheme). 
This argument can be easily shown by Ito's formula: \n\\begin{equation}\n dX'_t = [ \\textbf{ drift } ] dt + \\beta D^2 P(X_t) dW_t \\approx [ \\textbf{ drift } ] dt + \\beta D \\left( \\frac{1}{||X_t||_E}\\left( I_d - \\frac{X_t X_t^{\\top}}{||X_t||_E^2}\\right) \\right)dW_t,\n\\end{equation}\nwhere $D^2$ and $D$ are Euclidean differentials of orders $2$ and $1$, respectively.\nNote that, for simplicity, we regard $X_t$ as a solution to a Euclidean SDE with respect to the Ito integral. Since the derivative in the second term is approximately $O(||X||^{-2})$, the diffusive behavior is suppressed after projection. \n\n\nThere exist only a few known global maps, such as length scaling, that can project vectors in the ambient Euclidean space $\\mathbb{R}^3$ onto a specific type of model manifold such as the sphere $\\mathbb{S}^2$. For generality, the key point in applying manifold theory to neural SDEs is to define the Riemannian counterpart in a local sense.\n\nIn the literature [C], a similar but concrete concept was already proposed that applies the projection method to respect the geometric constraint $||X||_E = 1$. In particular, they defined the diffusion on the sphere by using a global projection that transforms Euclidean vectors onto the spherical tangent space (not the model sphere) and enjoys the diffusion property:\n\n$dX_t^i = \\left( \\delta_{ij} - X_t^i X_t^j \\right) \\circ dW_t^j, \\quad X_0 \\in \\mathbb{S}^2$\n\nNote that the above spherical Brownian motion corresponds to a specific case of the second term in (2) for constructing the diffusion on the sphere. Contrary to the above example, we adopt the general point of view of defining the diffusion process via horizontal representations.\n\n$\\textbf{[Effect of Stopping time in practice]}$\n\nWe empirically selected the hyper-parameter $c_1$ in (51) by checking the probability of occurrence of infeasible numbers in experiments. In general, $c_1 = 10^3$ was enough to stabilize the training scheme. We observed that the co-metric tensor occasionally produced infeasible values ($\\approx 10^8 \\gg c_1$) with low probability in the spherical experiments. If these unexpected values are fed into the proposed RNSDE, it quickly falls into a failure mode during the Euler-Maruyama scheme. As mentioned in Appendix L186~196, this phenomenon is predictable because of the specific form of the suggested metric tensors. \n\n--------\n\n[C] An introduction to the analysis of paths on a Riemannian manifold, Daniel W. Stroock\n\n", " Thanks a lot for your valuable feedback. Also, we thank you for pointing out the strengths of our paper. We revised our manuscript accordingly and provide a detailed response to the raised comments below.\n\n$\\textbf{[Comparison to the prior work]}$\n\nFor the overall discussion, we clarified an elaborate comparison with the prior work in the [general response] section. We specified the differences in motivation, technical solutions, etc., in that section and in our main paper. Please kindly refer to the general response section.\n\n$\\textbf{[Additional Experiment]}$\n\nAs the reviewer requested, we set up an additional experiment including the result of Moser Flow on the density estimation task. For the experiment, we thoroughly followed the original experimental setting of Moser Flow. To train Moser Flow, an MLP with 6 hidden layers of 512 neurons was used, and the training epochs and learning rate were set to be identical to the original settings. 
We referred to the experimental settings in their public code implementation [https://github.com/noamroze/moser_flow]. \n\n|Methods | 8-shapes | Two moons | Spiral |\n|--:|---|---|---|\n| Moser Flow | 5.81 | 6.10 | 7.56 |\n| RNSDE | $\\mathbf{5.67}$ | $\\mathbf{5.66}$ | $\\mathbf{6.97}$ |\n\nWe evaluated the density estimation performance with the $2$-Wasserstein distance $\\mathcal{W}_2$ ($\\times 10^{-2}$) on each synthetic dataset. The table above shows that the proposed method still outperforms Moser Flow. Moser Flow performed well, especially on \\textit{8-shapes}, but our model still surpassed it on the other densities. In the revised manuscript, we will add the results of Moser Flow.\n", " \nWe thank the reviewer for their kind feedback on our work. We addressed all the comments in the updated version of our manuscript and provide a point-by-point response below.\n\n$\\textbf{[Comparison to the prior work]}$\n\nFor a comprehensive discussion, we provided a detailed comparison in the [general response] section. Please refer to the general response section.\n\n$\\textbf{[Concerns about irregularity]}$\n\nIn the vessel route experiment, time stamps are essentially $\\textbf{non-uniform}$ and $\\textbf{irregular}$, as the dataset contains vessel navigation information (e.g., geographic coordinates) acquired on separate time scales. To make a reconstruction using this dataset, our method supports the irregular temporal setting. We followed the identical protocol suggested in the previous work [A] to train the baselines, where irregular events are the standard assumption. Since their model (CSDE-TP, [A]) is defined on the ambient Euclidean space $(\\mathbb{R}^3, d_E)$, we substitute the Euclidean distance with the Riemannian counterpart $(\\mathbb{S}^2, d_{\\mathbb{S}^2})$.\n\n$\\textbf{[Empirical Validation for Representability]}$\n\nMathematically, any point on the unit sphere has unit norm. Thus, the following condition should be satisfied if the neural dynamics model respects the underlying geometry of the sphere:\n\\begin{equation}\n X = [x^1, x^2, x^3]^T, \\quad \\sqrt{ (x^{1})^2 + (x^{2})^2 + (x^{3})^2} = 1,\n\\end{equation}\nwhere $X_t = [x_t^1, x_t^2, x_t^3]$ are the network outputs at times $t \\in [0, T]$. This mathematical constraint is somewhat artificial, as neural networks generally produce vectors in $\\mathbb{R}^3$. In this regard, prior works apply an additional projection function to impose this global structural constraint. To train the baseline Euclidean methods under this constraint, we utilized a $\\textbf{stereographic projection}$ that maps the vectorial network outputs onto the sphere. After the projection procedure, we trained the baseline methods (Latent ODE, ODE-RNN) by replacing the usual Euclidean distance with the Riemannian one to calculate a Gaussian-type log-likelihood as suggested in [B]. The key point in this experiment is whether prior works perform well even after the geometric projection. 
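As a toy illustration of this projection-and-check step (a sketch that uses simple length normalization in place of the stereographic map, purely for brevity; the tensor shapes and values are hypothetical):

```python
import torch

def project_to_sphere(x, eps=1e-8):
    """Map raw network outputs in R^3 onto the unit sphere by length normalization.
    (Shown for illustration; the baselines above use a stereographic map instead.)"""
    return x / (x.norm(dim=-1, keepdim=True) + eps)

raw = torch.randn(5, 3)            # hypothetical Euclidean network outputs
on_sphere = project_to_sphere(raw)
print(on_sphere.norm(dim=-1))      # all ~1.0, i.e. the unit-norm constraint holds
```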
Table 4 shows that our method outperforms prior works and Euclidean opponents may lose expressivity on the sphere.\n\n-------\n\n[A] Neural Markov Controlled SDE: Stochastic Optimization for Continuous-Time Data, ICLR 2022.\n\n[B] Latent Ordinary Differential Equations for Irregularly-Sampled Time Series, NeurIPS 2019.", " This paper proposes a method to express the stochastic representation over manifolds, using a novelly defined Riemannian Neural stochastic differential equation (RNSDE), the method is theoretically well-analyzed at each step, empirically RNSDE outperforms several baselines on some simple tasks. I’m not an expert on neural SDE, so some points maybe stupid :)\nS1: the method is theoretically sound in most steps, except the learning step and some approximations;\nS2: the extension from Euclidean neural SDE to Riemannian Neural SDE makes it easer to understand and follow;\nS3: Empirically the proposed RNSDE outperforms some previous baselines on tasks including generative modeling, interpolation and reconstruction. \n\nW1: I think this paper is not that ‘readable’ for general NeurIPS audience, but only experts in this area, currently it’s heavy math, I would suggest add more explanations/backgrounds and intuitions to the text, for example the Eells-Elworthy-Malliavin interpretation of the diffusion process, and what each step in the Scheme is doing, this could significant help abstract more readers into this area. \n\nW2: RNSDE incurs more computations than the Euclidean case, including metric tensors, christoffee symbol and etc, which has to be computed for every local neighborhood as moving along the path, I would like to see some computation burden analysis or empirical results of RNSDE, plus compared to some baselines, particularly to one I mention in next question part. \n\nW3: The practical usages/applications of this method looks rare to me, what’s more, the manifolds are also simple, though S^n is claimed to be one model, I only see n=2 case, and with a torus model, is it possible to test on high dimensional manifolds? Q1: I understand that Euclidean Neural SDE gives results not on the manifold, but a very naive approach is to enforce a hard constraint (e.g. projection to the manifold) after each Euclidean neural SDE step, or something as a first order retraction? One can also move along local frameworks to get a trajectory on the manifold, do you include this baseline to compare with? Both performance and computations\n\nQ2: I’m curious what’s the choice of the stopping time tau could affect in practice? How large it needs to be, how severe the gradient explosion is and how to choose this value? Mostly discussed above, the practical usage seems to be rare, and the computations are heavy, limited to a small set of manifolds and low dimensions. ", " This paper introduces a neural network method to learn and represent stochastic differential equations on manifolds. Applications are presented for density modeling on manifolds and learning latent paths on manifolds. Strengths\n-----------\n+ The paper is presented very well. 
I appreciated the clear presentation, precise mathematical language, thorough writing, and informative figures.\n+ The paper is very technically solid, as the core difficulty of extending Neural SDEs to manifolds is aptly handled with the correct Riemannian-geometric constructs.\n+ The experiments are pretty well done, and the Vessel Route dataset is an interesting contribution to this space.\n\nWeaknesses\n--------------\n- There is work that slightly predates this one on extending SDEs to manifolds in the context of scored based generative models and diffusion (https://arxiv.org/abs/2202.02763). While I believe that the two works are different, the above work should be included in the discussion.\n- For the density modeling experiments, some baselines are missing. In particular, I believe there should be a comparison with Moser Flow.\n\nVerdict\n--------\nI generally lean on accepting the paper, as I think that it is a solid contribution to the space. However, I would want some more discussion with the baselines and previous methods mentioned above (as I think they sufficiently predate the work). My questions are covered in the weaknesses section. In particular, I would like to see a comparison with Moser Flow and a discussion with the Riemannian Score Based Modeling Paper. Yes", " The submission proposes an approach for modeling stochastic differential equations on Riemannian manifolds using neural networks called Riemannian neural SDE (RNSDE). This approach allows for learning a probability distribution over a manifold and modeling stochastic processes happening on the manifold. RNSDE uses neural networks to model the terms in the local representation of an SDE and allows for probability density estimation using the dual metrics formulation. Experiments show that RNSDE outperforms some of the related deep learning baselines in a set of illustrative toy tasks and in several real-world benchmarks. ### Originality\n\nAccording to the provided literature review, the submission proposes a novel approach to modeling SDEs on manifolds. Authors clearly state the difference between their method and related works, highlighting that RNSDE requires less trainable components to model evolution of the density and claiming that it provides potentially more flexible solution than some other restricted methods. However, the literature review does not cover diffusion models which seem to be very relevant to the proposed method, especially in its application to density estimation. Specifically, there’s a relevant work [1] studying diffusion models on Riemannian manifolds which would be worth to compare with or at least to mention as a competitive work and highlight the key differences.\n\n### Quality\n\nThe submission is technically sound and complete with a solid theoretical basis and good enough experiments. Authors’s claims are mostly supported by experimental results and most of the baseline methods are appropriate. \n\n### Clarity\n\nThe presentation of the method is very decent, authors introduce all the important concepts and provide preliminaries required for understanding the method. They also support some of the involved concepts and derivations with intuitive explanations which are very helpful for a broader machine learning audience. Moreover, informative illustrations helps a lot to make sense of the method and experiments. However, a solid background in manifold theory is required to fully understand the method and its theoretical basis. 
\n\n### Significance\n\nThe proposed method addresses an important task of modeling SDEs on manifolds in an original way surpassing some of the existing methods. The literature review and experimental comparison suggests that the proposed method is fairly significant.\n\n[1] De Bortoli, Valentin, et al. \"Riemannian score-based generative modeling.\" arXiv preprint arXiv:2202.02763\n (2022). 1. It would be worth to consider diffusion models in related works, especially [1]\n2. In the experiments with the Vessel route dataset, you choose Latent ODE and ODE RNN as the baselines. As far as I know the main feature of these models is that they can deal with irregular time series, i.e. where measurements are not equidistant wrt time. Does RNSDE capable to take this irregularity into account? If not, this comparison looks like comparing apples with oranges, where it is hard to make any conclusion why one model outperforms the others. I would highly recommend authors to add a comment on this in the text of the paper.\n3. In the introduction section there’s a strong claim that conventional approaches which do not take into account geometry of the task are losing in their expressivity. However, there’s a little of validation for that in the experiments section. I would recommend to add this kind of conventional approaches as baselines to all the experiments to give a clear evidence of the importance of taking into account geometry of the data. Authors clearly state that they focus only on Euclidian manifolds while there’re other domains (non-compact Lie groups, product spheres) important for downstream applications to consider. There’re no discussion of the negative social impact provided. " ]
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 3, 4, 2 ]
[ "nips_2022_1ryTomA0iKa", "20RnQXosm8P", "7cpUOuIXL8M", "8Ivqt_XIt7", "nips_2022_1ryTomA0iKa", "nips_2022_1ryTomA0iKa", "nips_2022_1ryTomA0iKa" ]
nips_2022_C9yUwd72yy
Learning Latent Seasonal-Trend Representations for Time Series Forecasting
Forecasting complex time series is ubiquitous and vital in a range of applications but challenging. Recent advances endeavor to achieve progress by incorporating various deep learning techniques (e.g., RNN and Transformer) into sequential models. However, clear patterns are still hard to extract since time series are often composed of several intricately entangled components. Motivated by the success of disentangled variational autoencoder in computer vision and classical time series decomposition, we plan to infer a couple of representations that depict seasonal and trend components of time series. To achieve this goal, we propose LaST, which, based on variational inference, aims to disentangle the seasonal-trend representations in the latent space. Furthermore, LaST supervises and disassociates representations from the perspectives of themselves and input reconstruction, and introduces a series of auxiliary objectives. Extensive experiments prove that LaST achieves state-of-the-art performance on time series forecasting task against the most advanced representation learning and end-to-end forecasting models. For reproducibility, our implementation is publicly available on Github.
Accept
The paper presents a novel learning approach named LaST for time-series forecasting based on variational inference to disentangle the seasonal-trend representations in the latent space. Empirical results validate the effectiveness of the proposed method in comparison with several strong baselines. Reviewers generally agree the work is technically solid, the idea is novel, the experiments are convincing, and the paper is well presented. Thus, the paper is clearly above the acceptance bar. Authors are encouraged to incorporate all the discussions and the additional results from the rebuttal into the final version.
train
[ "xgkBP6cE25T", "PmQZsEWYDre", "dX0-aFFUF6mf", "RdnCN-MIFDC", "XYynOTFts0F", "i2Ro0Cy9h_8", "qPy03NJWtxQ", "kXu2QFq97dK", "ynME5FnsLJe", "sRmTTlhqtKA", "r6OxGF_cWAz", "ovVwsDvcg6", "6u_gBx4gZKF", "zO7QCIwaOvP", "FqjOEGdKLx7" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nThe authors have provided the rebuttal responses. The discussion period between authors and reviewers will end soon. \nPlease do check the author's response, acknowledge your reading, and update your review if needed. \nIf there is any further question, please do ask the authors to clarify before the discussion period ends. \n\nThank you for your professional service! \n\nYour AC\n", " Thanks for your further response. I have raised my score from 5 to 6.", " Thanks for your response and appreciation. Here are the responses to your remaining concerns.\n\n---\n\n**Q1:** I think the convergence property can be further analyzed.\n\n**A1:** Thanks for the good suggestions. As per your request, we have conducted additional experiments to validate the convergence of our model and put the results into our supplementary materials (see “convergence.pdf”). Figure 1 shows the descending process of training, valid, and test loss of our model as the epochs increase, where we can see that all losses first drop and then goes to level off. The downward trend is not obvious on ETTm1 dataset. This is because ETTm1 is a larger-scale dataset and it requires over 1,000 times stochastic gradient descent at every epoch. We visualize the losses change at the first epoch in figure 2. We can observe that losses drop a lot at the first epoch. These results demonstrate that our model converges well on real-world datasets. We will add these analyses in our final version.\n\n---\n\n**Q2:** The stochasticity of time series is neglected in the decomposition, which should also be noticed by the authors.\n\n**A2:** Thanks for the comment. In our paper, we have a simple assumption that time series can be decomposed into seasonal and trend parts. It is a classical decomposition strategy and achieves success in time series forecasting. However, as your mentioned, it does ignore the stochasticity which usually appears in real-world datasets. On the one hand, variational inference implicitly consists of Gaussian noise when establishing normal distribution, which simulates stochasticity to some extent, but the effect is limited. On the other hand, considering the stochasticity is also a challenging work in time series. Some outstanding papers such as [1] have made efforts to model it. In our future work, we may consider modeling stochasticity explicitly. We will clarify this point in our paper. Thanks for your suggestions again!\n\n[1] Sun, Fan-Keng, Chris Lang, and Duane Boning. \"Adjusting for autocorrelated errors in neural networks for time series.\" *Advances in Neural Information Processing Systems*\n 34 (2021): 29806-29819.\n\n---\n\n**Q3**: About the future work, maybe imputation shares some similarities with forecasting, but anomaly detection is totally a different area w.r.t. forecasting (Classification v.s. Regression). You may change this description.\n\n**A3:** Thanks for the helpful suggestions. Except for dealing with the time series, designing a good score function or classifier is of vital importance for an advanced anomaly detection model. We will point out the similarities and differences between these two tasks and organize the description carefully.", " Most of my concerns are addressed by the authors. I still have two concerns.\n\n- As stated in the original review, I also have questions about the \"limitations\". 
And I cannot find any discussions about this in the revised paper.\n- About the future work, maybe imputation shares some similarities with forecasting, but anomaly detection is totally a different area w.r.t. forecasting (Classification v.s. Regression). You may change this description.\n\nIn general, I appreciate this paper's contribution to time series forecasting. I am willing to raise the score if you address the above two questions.", " **Q5:** I am a little confused about the predictor (Figure 1 (b)) without the supplementary material. Especially in lines 127-128, the inverse DFT cannot change the sequence length. You should give more details about the “extend” in the main text.\n\n**A5**. Sorry for the confusion. Our predictor is composed of seasonal and trend parts. The trend part consists of two linear transformations with two feed forward networks. It can be formulated by:\n\n$$\n\\begin{align}\n\\\\tilde{Z}^t =FFN_1(Z^t),\\\\\\\\\nY^t=FFN_2(\\\\tilde{Z}^t),\n\\end{align}\n$$\n\nwhere dimensions transformations are $FFN_1:T\\rightarrow \\tau$ and $FFN_2: d_Z \\rightarrow d$. Here $d_Z$ and $d$ denote dimensions of representations and time series, respectively. The seasonal part first exploits the discrete Fourier transform (DFT) and inverse DFT (iDFT), which, as Appendix B.1 shows, can be formulated by:\n\n\n\n$$\n\\begin{align}\n&Z_{\\mathcal{F},k}^s = \\mathcal{F}(Z^s)_k = \\sum _{t=1}^T \\cdot \\exp(\\frac{-2\\pi i k t}{T}),\\qquad 1\\leq k\\leq \\left \\lfloor \\frac{T+1}{2} \\right \\rfloor \\\\\\\\\n& Z _{t}^s = \\mathcal{F}^{-1}(Z _{\\mathcal{F}^s}) _{t} = \\frac{1}{T} \\sum _{k=1}^{T}Z _{\\mathcal{F},k}^s\\cdot \\exp(\\frac{2\\pi i kt}{T}), \\qquad 1 \\leq t \\leq T + \\tau.\n\\end{align}\n$$\n\nWith these two formulas, sequence length has been extended, and thus we can derive predictable representations $\\tilde{Z}^s$ by taking the last $\\tau$ time steps. Then we transform it into seasonal forecastings:\n\n$$\nY^s=FFN(\\tilde{Z}^s),\n$$\n\nwhere, same as the trend predictor, $FFN: d_Z \\rightarrow d$. Finally, we obtain the forecasting by adding these two parts $Y=Y^s+Y^t$. We will provide more details in the main text. Thank you for this great suggestion!\n\n------------------\n\n**Q6:** It seems the prediction results show the over-smoothing problem (Figure 3). This problem is fatal for details predicting of time series. This can be caused by the design of the predictor. More showcases are required especially the non-stationary time series. \n\n**A6**. Thanks for this insightful comment. As per your request, we have added some non-stationary time series in supplementary materials (see “non_stationary_cases.pdf”) to further validate the capability of our model. It shows that facing non-stationary time series, the reconstructed seasonal and trend still jointly restore the time series with their own characteristics. We’d be happy to extend these cases in our final version.", " **Q2:** Comparing baselines: (1) I think the N-BEATS is a necessary baseline since you both adopt the decomposition and a similar predictor for forecasting (linear model for trend and sine/cosine functions for seasonal). (2) The most important thing of this paper is to explain the necessity of learning ‘latent’ representations. What about using the moving average of Autoformer for decomposition and adopting the same predictor for prediction? I think this vanilla baseline is also necessary.\n\n**A2**: Thanks for this great comment and we agree that these two baselines are necessary. 
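For clarity, the following sketch illustrates the "extend" step of the seasonal predictor described in A5 above: the forward DFT sums $Z^s_t\exp(-2\pi i kt/T)$ over the $T$ observed steps, and the inverse transform is evaluated on the longer grid of length $T+\tau$, after which the last $\tau$ steps are kept. The use of `numpy.fft.rfft`, the variable names, and the real-valued assumption are illustrative choices, not the authors' implementation.

```python
import numpy as np

def extend_seasonal(Z_s, tau):
    """Evaluate the inverse DFT of a length-T sequence on a grid of length T + tau.
    Sketch of the 'extend' step in A5; assumes real-valued representations."""
    T = Z_s.shape[0]
    Zf = np.fft.rfft(Z_s, axis=0)                  # coefficients for k = 0 .. floor(T/2)
    k = np.arange(Zf.shape[0])[None, :]
    t = np.arange(T + tau)[:, None]
    basis = np.exp(2j * np.pi * k * t / T)         # same harmonics, longer time grid
    weights = np.ones(Zf.shape[0])                 # rfft drops negative frequencies,
    weights[1:] = 2.0                              # so double the interior terms ...
    if T % 2 == 0:
        weights[-1] = 1.0                          # ... except the Nyquist bin
    return ((basis * weights) @ Zf).real / T       # shape (T + tau, d)

Z_s = np.random.randn(96, 8)                       # toy seasonal representations, T = 96
Z_tilde = extend_seasonal(Z_s, 24)[-24:]           # keep the last tau steps for forecasting
print(Z_tilde.shape)                               # (24, 8)
```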
\n\nN-BEATS is a univariate time series forecasting model. Thus, we provide comparisons between our model and N-BEATS on the univariate setting. The results are shown in the following two tables:\n\nTable 1: Univariate forecasting comparisons on ETTh1 dataset (metric: MSE/MAE).\n\n| Model | 24 | 48 | 168 | 336 | 720 |\n| --- | --- | --- | --- | --- | --- |\n| N-BEATS | 0.042/0.156 | 0.065/0.200 | 0.106/0.255 | 0.127/0.284 | 0.269/0.422 |\n| LaST | 0.030/0.131 | 0.051/0.169 | 0.078/0.211 | 0.100/0.246 | 0.138/0.298 |\n\nTable 2: Univariate forecasting comparisons on ETTm1 dataset (metric: MSE/MAE).\n\n| Model | 24 | 48 | 96 | 228 | 672 |\n| --- | --- | --- | --- | --- | --- |\n| N-BEATS | 0.031/0.117 | 0.056/0.168 | 0.095/0.234 | 0.157/0.311 | 0.207/0.370 |\n| LaST | 0.011/0.077 | 0.021/0.108 | 0.033/0.134 | 0.069/0.197 | 0.100/0.239 |\n\nThe results indicate that LaST performs much better than N-BEATS on the univariate time series forecasting task. \n\nAs per your request, we keep the predictor identical and further compare our decomposition approaches with that in Autoformer, i.e., moving average. The experimental results are as follows:\n\nTable 3: Multivariate forecasting comparison with different decomposition approaches on ETTh1 dataset (metric: MSE/MAE).\n\n| Approach | 24 | 48 | 168 | 336 | 720 |\n| --- | --- | --- | --- | --- | --- |\n| Moving Average | 0.364/0.398 | 0.384/0.406 | 0.556/0.516 | 0.690/0.598 | 0.976/0.777 |\n| Ours | 0.324/0.368 | 0.351/0.380 | 0.468/0.453 | 0.566/0.512 | 0.740/0.650 |\n\nTable 4: Multivariate forecasting comparison with different decomposition methods on ETTm1 dataset (metric: MSE/MAE).\n\n| Approach | 24 | 48 | 96 | 288 | 672 |\n| --- | --- | --- | --- | --- | --- |\n| Moving Average | 0.228/0.293 | 0.300/0.344 | 0.345/0.379 | 0.394/0.407 | 0.503/0.476 |\n| Ours | 0.218/0.289 | 0.280/0.329 | 0.323/0.360 | 0.392/0.403 | 0.491/0.466 |\n\nWe can see that our approach outperforms Moving Average. We will include the above baselines and corresponding results in our main paper. \n\n---------------\n\n**Q3:** What is the difference between the autocorrelation calculation used in Equ 6 and the autocorrelation calculation method in Autoformer? Maybe giving a citation is better.\n\n**A3:** Thanks for this suggestion. Our autocorrelation calculation, deriving from [1], is the same as that in Autoformer. Compared to direct calculation in temporal domain, it achieves the autocorrelation sequence in frequency domain and holds efficiency. The formulas for this calculation have been provided in Appendix B.2. We will add this citation in our final version.\n\n[1] George EP Box, Gwilym M. Jenkins, Gregory C. Reinsel, and Greta M. Ljung. Time series analysis: forecasting and control. John Wiley & Sons, 2015. \n\n\n----------\n\n**Q4:** The visualization in Figure 2 is not surprising. You can conduct the same visualization to Autoformer. I think the simple moving average block can achieve the representation disentanglement well. You can compare these two decomposition methods. \n\n**A4:** Thanks for this good comment. We conducted this visualization to evaluate whether our mechanism can make seasonal-trend representations disentangled in latent space. Results (cf. figure 2) in the main paper support the success. As per your suggestion, we visualize the seasonal and trend representations in the decoder in supplementary materials (see “repre_visual.pdf”). The results show that their representations are muddled and cannot be distinguished well. 
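As a concrete reference for the two reconstruction measures discussed in this thread, the sketch below computes an autocorrelation sequence in the frequency domain (the Wiener–Khinchin route referred to in A3) and a first-difference-based temporal correlation of the CORT type. Both are generic textbook implementations with a toy series assumed for illustration; they are not the authors' code.

```python
import numpy as np

def autocorrelation(x):
    """Autocorrelation sequence of a 1-D series computed in the frequency domain
    (Wiener-Khinchin with zero-padding); a generic implementation, not LaST's."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    f = np.fft.fft(x, n=2 * n)                     # zero-pad to avoid circular wrap-around
    acf = np.fft.ifft(f * np.conj(f)).real[:n]
    return acf / acf[0]                            # lag-0 value normalized to 1

def cort(x, y):
    """First-difference (temporal) correlation, one common form of the CORT measure."""
    dx, dy = np.diff(x), np.diff(y)
    return float(np.dot(dx, dy) / (np.linalg.norm(dx) * np.linalg.norm(dy) + 1e-12))

t = np.arange(200)
x = np.sin(2 * np.pi * t / 24) + 0.01 * t          # toy series: period-24 season plus a slow trend
print(autocorrelation(x)[:5])                       # starts at 1 and decays over the first lags
print(cort(0.01 * t, 0.02 * t))                     # 1.0: identical (upward) trend direction
```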
This is because the decomposition mechanism of Autoformer paid attention to time series itself rather than representations. This visualization comparison suggests that: (1) learning disentangled seasonal-trend representations is not trivial, and (2) our proposed decomposition methods are effective. We will add these results and more discussions to the paper.", " Many Thanks to Reviewer JNkP for the thorough review and valuable comments. The responses to your concerns are as below.\n\n--------------\n\n**Q1:** The 'proof' of the 'Theorem' 2 is too intuitive. (1) The minimization between autocorrelations of $\\hat{X}$ and $\\hat{X}^s$ is not convincing, which will bring the noise to the optimization. And this process also relies on an underlying assumption, that the raw time series is periodic by removing the trend, which is contradictory to the statement in line 89. (2) The same problem is also in the design of CORT.\n\n**A1**: Thanks for this great comment. We first respond to the concern about autocorrelations and CORT metrics. Starting from the design motivation, we require two evaluation standards for seasonal and trend characteristics as constraints that make two representations hold their corresponding semantics and further avoid entanglement. Thus, we choose autocorrelations to reflect the seasonal part. This metric denotes the similarity between every instant value in $X$ and the corresponding lagged value and is also used to extract seasonal features in many works [1], [2]. CORT focuses on the first differences between two input time series, which can further reflect their trend similarity. We agree that these two metrics are not strong enough to describe a time series thoroughly. In other words, we cannot determine a unique sequence convincingly only with autocorrelations and CORT. This is also the reason why we introduce an additional (MSE) loss, to ensure that the reconstruction is the same as the original signal. However, the purpose of using these two metrics is only to guide the optimization directions of seasonal and trend representations by constraining reconstructions $\\hat{X}^t$ and $\\hat{X}^s$. Ablation study results (cf. Table 3 in page 8) also support the observation that these two metrics can help enhance the forecasting performance. A formal treatment of the accidentally introduced noise is indeed an important aspect and they are interesting challenges for us to address in future work (e.g., how to detect and quantify the noises if any). we will add an explicit note to that effect in the final version, stating that we plan on investigating it. \n\nRegarding the statement in line 89, we believe that there is no contradiction. Similar to our proposed LaST, Autoformer also exploits the decomposition strategy for better time series forecasting. Our LaST and Autoformer both obey the same assumption that time series can be formed as the sum of seasonal and trend. The difference is that Autoformer decomposes seasonal and trend by a simple moving average pooling module while ours employ disentangled latent representations. Besides, LaST does not rely on a specific fixed window and thus is more flexible with input time series. Thanks again for your valuable comment. We will improve our description in the final version.\n\n[1] Michail Vlachos, Philip Yu, and Vittorio Castelli. On periodicity detection and structural periodic similarity. *SDM,* 2005, 449–460.\n\n[2] Michail Vlachos, Philip Yu, and Vittorio Castelli. A periodogram-based metric for time series classification. 
*Computational Statistics & Data Analysis*, 50:2668-2684, 2006.", " Many Thanks to Reviewer GaRG for the thorough review and valuable comments. The responses to your concerns are as below.\n\n-------\n\n**Q1:** Does these trend data Xt and seasonal data Xs are extracted from X? If so, how do you extract them?\n\n**A1:** Thanks for this constructive comment. Trend data $X^t$ and seasonal data $X^s$ cannot be extracted directly from the $X$. Thus, we use reconstruction $\\hat{X}^t$ and $\\hat{X}^s$ as substitutes in Eq. (6). Seasonal $\\hat{X}^s$ and trend $\\hat{X}^t$ are reconstructed by the encoding and decoding process of variational inference. Specifically, as algorithm E1 in Appendix E.2 shows, an input time series $X$ is fed into the seasonal encoder and trend encoder and is encoded as two disentangled representations, i.e., $Z^s$ and $Z^t$. Then, the decoders reconstruct them into $\\hat{X}^s$ and $\\hat{X}^t$ for further estimations and computations. Hope our explanation helps. We will explain the construction process more clearly in the main text.\n\n-------\n\n**Q2:** The generalizability of this LaST might be limited since it requires that the input data are a triplet: Time series data X, trend data Xt and seasonal data Xs. Many sequential datasets, especially the real-world ones, often don’t have the complete triplet data or with a lot of missing/noising data. So its generalizability should be further investigated.\n\n**A2:** Thank you for this remark. As we just mentioned, our proposed LaST does not require the trend and seasonal data as input. We reconstruct them from the representation $Z$ from the input time series $X$. Thus, most real-world datasets will meet the condition required by our model. Besides, data missing problem is a critical challenge in the time series data modeling. We’d like to pay more attention of this problem in our future work.\n\n-------\n\n**Q3:** The author might need to add more baselines (other disentangled representation learning methods for sequential data). Can you compare your results with other disentangled representation learning methods for sequential data?\n\n**A3:** Of course! We add a recently proposed sequential disentanglement representation learning model C-DSVAE [1] as a new baseline and compare it with our model. In fact, one of our existing baselines, CoST, is also a disentangled representation learning-based model which employs contrastive learning in frequency and temporal domain to obtain the disentangled representations. 
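A minimal sketch of the encode/decode flow described in A1 above is given below: the input series is mapped by two separate encoders to latents $Z^s$ and $Z^t$, which are decoded into seasonal and trend reconstructions whose sum approximates the input. The plain linear layers, tensor shapes, and class name are assumptions made purely for illustration and do not reproduce LaST's architecture.

```python
import torch
import torch.nn as nn

class SeasonalTrendVAE(nn.Module):
    """Illustrative two-branch variational encoder/decoder, not the paper's model."""
    def __init__(self, d_in, d_z):
        super().__init__()
        self.enc_s = nn.Linear(d_in, 2 * d_z)      # outputs mean and log-variance
        self.enc_t = nn.Linear(d_in, 2 * d_z)
        self.dec_s = nn.Linear(d_z, d_in)
        self.dec_t = nn.Linear(d_z, d_in)

    @staticmethod
    def reparameterize(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):                          # x: (batch, T, d_in)
        z_s = self.reparameterize(self.enc_s(x))
        z_t = self.reparameterize(self.enc_t(x))
        x_s_hat, x_t_hat = self.dec_s(z_s), self.dec_t(z_t)
        return x_s_hat, x_t_hat, x_s_hat + x_t_hat # seasonal, trend, full reconstruction

x = torch.randn(4, 96, 7)
x_s, x_t, x_rec = SeasonalTrendVAE(7, 32)(x)
print(x_rec.shape)                                 # torch.Size([4, 96, 7])
```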
In the following two tables, we compare our model with C-DSVAE and CoST on ETTh1 and ETTm1 datasets:\n\nTable 1: Multivariate forecasting comparison with disentangled methods on ETTh1 dataset (metric: MSE/MAE).\n\n| Approach | 24 | 48 | 168 | 336 | 720 |\n| --- | --- | --- | --- | --- | --- |\n| C-DSVAE | 0.428/0.438 | 0.487/0.462 | 0.621/0.590 | 0.735/0.628 | 0.990/0.781 |\n| CoST | 0.389/0.429 | 0.437/0.464 | 0.643/0.582 | 0.812/0.679 | 0.812/0.679 |\n| LaST | 0.324/0.368 | 0.351/0.380 | 0.468/0.453 | 0.566/0.512 | 0.740/0.650 |\n\nTable 2: Multivariate forecasting comparison with disentangled methods on ETTm1 dataset (metric: MSE/MAE).\n\n| Approach | 24 | 48 | 96 | 288 | 672 |\n| --- | --- | --- | --- | --- | --- |\n| C-DSVAE | 0.429/0.423 | 0.576/0.510 | 0.584/0.513 | 0.600/0.535 | 0.618/0.547 |\n| CoST | 0.246/0.329 | 0.331/0.386 | 0.378/0.419 | 0.472/0.486 | 0.620/0.574 |\n| LaST | 0.218/0.289 | 0.280/0.329 | 0.323/0.360 | 0.392/0.403 | 0.491/0.466 |\n\nFrom the tables, we can observe LaST achieves the best performance in all forecasting horizons, demonstrating the effectiveness of our proposed disentanglement learning scheme. We will add the new results in the main text. \n\n[1] Junwen Bai, Weiran Wang, and Carla P. Gomes. Contrastively disentangled sequential variational autoencoder. NeurIPS, 2021, 10105-10118.", " **Q3:** When discussing the connection to previous papers, the writing should both include the difference and the inheritance. Which components are extended from the previous studies? Also, what are techniques that can potentially be combined (such as contrastive learning from CoST)?\n\n**A3:** Thanks for the helpful comment. As the reviewer has pointed out, CoST exploits the contrastive learning method to encourage the discriminative of seasonal and trend representations. Autoformer employs a moving average pooling module to decompose time series directly. Different from these two approaches, LaST designs two seasonal and trend metrics and minimize the mutual information to further disentangle the seasonal and trend representations. \n\nBesides, we are inspired by CoST to design the seasonal predictor with the discrete Fourier transform (DFT) mechanism. Autoformer and our LaST both use Autocorrelation to reflect the seasonal patterns of time series. However, Autoformer designs it as a kind of score to establish the correlations between time steps, while LaST employs it to measure the seasonal reconstruction. We will add these comparisons in our final version to assist readers better understand the background and connections of our model with previous approaches. Thank you for this great suggestion.\n\n-------\n\n**Q4:** What is the training and inference time? How it can be compared with the baselines?\n\n**A4:** Thanks for this comment. We have conducted experiments to compare the running time with advanced time series models. We keep the batch size identical and record the time consumption of an epoch. The results are concluded as follows: \n\nTable 1: Training and inference time consumption comparisons on ETTh1 datasets in different forecasting horizons. \n\n| Method | 24 | 168 | 720 |\n| --- | --- | --- | --- |\n| CoST | 8.23 s | 9.40 s | 13.13 s |\n| LaST | 14.88 s | 18.75 s | 24.53 s |\n| Autoformer | 19.41 s | 35.44 s | 83.57 s |\n\nAs the table shows, Autoformer needs more running time especially in large horizons, since the complexity of its attention mechanism is squared with forecasting horizons, while the complexities of LaST and CoST are linear with that. 
Besides, the training process of our model on ETTh1 dataset only requires about 20 epochs to converge. Thus, the complete training at a specific forecasting horizon only takes 10-minutes of consumption. We will add these results to supplementary materials in our final version. Thanks for your suggestions again!", " Many thanks to Reviewer r48g for the thorough review and valuable comments. The responses to your concerns are as below.\n\n-------\n\n**Q1:** One weakness, as also noticed by the author, is the potential degeneration of features caused by narrowing down prior and posterior, and the effect is not discussed in detail. The drawback of feature degeneration should be discussed in more detail.\n\n**A1:** Thanks for the insightful comment. We agree that potential feature degeneration exists when narrowing down the prior and posterior. This is because the widely used variational inference [1] maximizes the likelihood $P(X,Y)$ with evidence lower bound (ELBO) term, which minimizes the KL divergence between the prior $P(Z)$ and $Q(Z|X)$. Theoretically, the $Q(Z|X)$ will be non-informative for the inputs $X$ if the capacity of neural networks is strong enough. This situation leads to useless forecastings for the input time series. However, we cannot simply remove this term: we should ensure it is still an effective lower bound. \n\nThus, to tackle this feature degeneration problem, we introduced the mutual information (MI) between observation $X$ and latent representation $Z$ and try to maximize this term to maintain the correlations between $X$ and $Z$. The detailed maximization method and technique are shown in Sect. 4. \n\nBesides, the experimental results of “*w/o lower bound”* in ablation study (cf. Table 3 in page 8) can be seen as negative effects of feature degeneration. Without the mutual information constraint, the performance drops ~4% on average, and in some metrics (e.g., MSE with horizon 168 on ETTh1 dataset), this variant drops ~10%. This result, in a sense, additionally supports the claim of effectiveness of our solution. We will provide more detailed discussions of feature degeneration.\n\n[1] Diederik P. Kingma and Max Welling. Auto-encoding Variational Bayes. ICLR, 2014.\n\n-------\n\n**Q2:** The development of Theorem 2 is a bit confusing. The authors first claim that L rec can be estimated without leveraging $X^s$ and $X^t$, while eq 6 still has the two variables. Line 143, what is “first difference?”\n\n**A2:** Thank you for pointing this out, and we wish to apologize for the confusion. Since the seasonal $X^S$ and trend $X^t$ cannot be obtained as input, we use $\\hat{X}^s$ and $\\hat{X}^t$ to replace them in Eq. (6) and in Line 143, where the $\\hat{\\cdot}$ mark denotes the corresponding reconstructions from the variational inference. We will discriminate between these two marks more clearly in the final version. Besides, $\\Delta X^t$ should be $\\Delta X$ in Eq. (7). Thanks for the careful review.\n\nThe first difference denotes the variation during the time series. In a time series $X_{1:T}$ with length $T$, it is also a series and can be obtained by $\\\\{X_{t+1}-X_{t} \\\\} _{t\\in [1:T-1]}$. Clearly, this reflects the trends of time series, and we will provide more detailed explanation in the final version.\n", " Many thanks to Reviewer hjLn for the thorough review and valuable comments. 
The responses to your concerns are as below.\n\n-------\n\n**Q1:** Superposition of a lot of complex components to make it work, possibly creating uncertainty on the robustness of the model.\n\n**A1:** Thank you for raising this concern. We agree that in our model, there are some complex components like the Fourier transform and its inverse transform in our predictor. We would like to mention that most of the improvements we proposed, including the autocorrelation and mutual information terms, are only involved in the training phase as optimization targets. Once the model is trained, these components are not included in the time series forecasting phase. We also provide ablation analysis for each component in Table 3 in the main context to validate their improvement.\n\nFor the robustness of our model, we have provided the standard deviations in Table F3 (Fluctuation analysis) in Appendix F. We run baselines and our model three times on all datasets to observe the performance fluctuations. The table shows that performance changes are within 3% for most datasets. Even on the small-size dataset “exchange rate”, the error records are also within 10%. These fluctuation ranges are much smaller than the enhancement our model achieves. Thus we believe our model is as robust as baselines. \n\n-------\n\n**Q2:** How complex was performing this training and getting it to work? To me, it seemed like a daunting task.\n\n**A2:** Thanks for this comment. The computational complexity is actually a very important property for evaluating the overall quality of the model. We have analyzed the complexity of the training process in Appendix E.1. We have conducted additional experiments and measured the training time and the number of epochs that the model needs for converging. The results are shown in Table 1 below, where we can see that our proposed LaST is efficient and can reach convergence within 10 minutes. We’d be happy to add it to the supplementary materials and indicate it in the main text.\n\nTable 1: Training and inference time consumption and the number of epochs on the ETTh1 dataset in different forecasting horizons. \n\n| Horizon | 24 | 168 | 720 |\n| --- | --- | --- | --- |\n| Epoch time | 14.88 s | 18.75 s | 24.53 s |\n| # Epochs | 26 | 16 | 17 |", " The paper presents an approach for learning latent season-trend based representations for time-series forecasting, based on variational inference. A sesonal encoder and a trend encoder are used to obtain latent representations. The latent representations are then combined together to produce a forecast. The forecast is done via the predictor module, which makes use of Fourier Transform for the seaosnal representation and a simple MLP for a the trend representation, and then sums them up to obtain the forecast. Reconstruction error cannot be minimized directly, as the true trend and season components are not known, as a result autocorrelation distance is used for seaosnality reconstruction, and the temporal correlation is used for reconstructing the trend reconstruction. For the purpose of disentanglement, the MI between the latent components is minimized. The overall loss function has an ELBO loss, and MI maximization between the original time-series and latent component, and MI minimization between the season and trend latent components. Since the optimization is untractable, lower and upper bounds for MI are derived to help with the optimization. 
Experiments are performed on standard univariate and multivariate timeseries benchmarks, showing LaST achieves state of the art performance. Visualizations show effective disentanglement. Strengths\n- good motivation to expand on decomposition of trend seasonality\n- novel way to do so compared to existing approaches (e.g. variational inference vs CoST using contrastive learning)\n- meaningful loss function (elbo, reconstruction, predictor)\n- identified technical challenges in reconstruction, and proposed a way to address it\n- identified challenges in MI optimisation and developed bounds to address it \n- good coverage and comprehensive experiments - baselines + datasets\n- good ablation / visualizations\n- a generally comprehensive paper which is well written\n\nWeakness\n- superposition of a lot of complex components to make it work, possibly creating uncertainty on the robustness of the model - how complex was performing this training and getting it to work? To me, it seemed like a daunting task NA", " In this submission, the authors propose to learn disentangled seasonal and trend representations of seasonal time series. In particular, in contrast to previous methods that use average pooling for the trend feature, the major contribution here is to directly optimize the two representations by variational method. Also, a detailed analysis of the loss function and the optimization bound is provided. Empirical results show strong effectiveness and outperforms several strong baselines. The following analysis also supports the motivation. Originality: \nThe proposed method of representation disentanglement is an extension of other fields such as computer vision. However, given the nature of seasonal time series, the application of this method is non-trivial, as well as the design of a proper predictor. The bound proof is original in this setting.\n\nQuality: \n1. The conduct of empirical analysis is of high quality. The experiments are sufficient to support the claims (great performance boost). One weakness, as also noticed by the author, is the potential degeneration of features caused by narrowing down prior and posterior, and the effect is not discussed in detail. \n2. As the reviewer is not an expert in theoretical analysis, I would leave this part to other reviewers.\n\nClarity:\n1. The overall presentation is clear, including a good schematic pipeline description and results tables.\n2. The development of Theorem 2 is a bit confusing. The authors first claim that L rec can be estimated without leveraging Xs and Xt, while eq 6 still has the two variables. Line 143, what is \"first difference?\"\n\nSignificance:\nThe reviewer thinks this method provides significant advances in this subfield.\n 1. When discussing the connection to previous papers, the writing should both include the difference and the inheritance. Which components are extended from the previous studies? Also, what are techniques that can potentially be combined (such as contrastive learning from CoST)?\n\n2. What is the boundary of the method? The drawback of feature degeneration should be discussed in more detail.\n\n3. What is the training and inference time? How it can be compared with the baselines? As discussed above, the reviewer encourages the authors to face the general limitation of VI methods and discuss the implications in this setting.", " This paper focuses on the time series forecasting task and presents a decomposition method LaST to learn latent representations. 
The decomposition method is derived from variational inference and mutual information. Along with a predictor, LaST can achieve competitive performance in many benchmarks. ### Strengths\n1. The loss function for time series decomposition is reasonable and novel.\n\n2. This paper is well-organized and clear.\n\n3. The model performance is competitive and is with detailed analysis.\n\n### Weaknesses\n1. The ‘proof’ of the ‘Theorem’ 2 is too intuitive.\n- The minimization between autocorrelations of $X$ and $\\hat{X}^s$ is not convincing, which will bring the noise to the optimization. And this process also relies on an underlying assumption, that the raw time series is periodic by removing the trend, which is contradictory to the statement in line 89.\n- The same problem is also in the design of CORT($X$,$\\hat{X}^t$).\n2. Comparing baselines:\n- I think the N-BEATS is a necessary baseline since you both adopt the decomposition and a similar predictor for forecasting (linear model for trend and sine/cosine functions for seasonal).\n- The most important thing of this paper is to explain the necessity of learning ‘latent’ representations. What about using the moving average of Autoformer for decomposition and adopting the same predictor for prediction? I think this vanilla baseline is also necessary.\n 1. What is the difference between the autocorrelation calculation used in Equ 6 and the autocorrelation calculation method in Autoformer? Maybe giving a citation is better.\n2. The visualization in Figure 2 is not surprising. You can conduct the same visualization to Autoformer. I think the simple moving average block can achieve the representation disentanglement well. You can compare these two decomposition methods.\n3. I am a little confused about the predictor (Figure 1 (b)) without the supplementary material. Especially in lines 127-128, the inverse DFT cannot change the sequence length. You should give more details about the “extend” in the main text.\n4. It seems the prediction results show the over-smoothing problem (Figure 3). This problem is fatal for details predicting of time series. This can be caused by the design of the predictor. More showcases are required especially the non-stationary time series.\n No, the author has not discussed any limitations of this work. I think the convergence property can be further analyzed. Also, the stochasticity of time series is neglected in the decomposition, which should also be noticed by the authors.", " The author proposed VAE-based method, LaST, to disentangle the seasonal-trend representations of sequential data. The LaST uses the trend and seasonal data as the reconstructed targets, which enforce the model to learn representations dedicated to its target hence achieving better disentanglement. Finally, a predictor is introduced to guarantee its performance on the downstream tasks. Results show that LaST can achieve better predicting performance compared to other forecasting baselines. Pros:\n1. The usage of trend and seasonal input Xt and Xs force the model to learn dedicated representations and make us easier to evaluate the disentanglement.\n2. The author provides rigorous mathematical proof, including the decomposition of ELBO, and the lower and upper bounds for MI optimization.\n3. The author provides extensive results on both the prediction performance for downstream tasks and the disentanglement of the trend and seasonal features.\n\nCons:\n1. 
The author stated that \"existing approaches with a single high-dimensional representation sacrifice the information utilization and explainability\". Models like CoST uses different modules to extract trend/seasonal dependencies. The author fails to address the difference between those models that do not use a high-dimensional representation.\n\n2. Other disentangled representation learning methods for sequential data are not referenced as baselines i 1. Does these trend data Xt and seasonal data Xs are extracted from X? If so, how do you extract them?\n2. Can you compare your results with other disentangled representation learning methods for sequential data?\n 1. The generalisability of this LaST might be limited since it requires that the input data are a triplet: Time series data X, trend data Xt and seasonal data Xs. Many sequential datasets, especially the real-world ones, often don't have the complete triplet data or with a lot of missing/noising data. So its generalisability should be further investigated.\n\n2. The author might need to add more baselines (other disentangled representation learning methods for sequential data)." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 4 ]
[ "nips_2022_C9yUwd72yy", "dX0-aFFUF6mf", "RdnCN-MIFDC", "zO7QCIwaOvP", "zO7QCIwaOvP", "zO7QCIwaOvP", "zO7QCIwaOvP", "FqjOEGdKLx7", "6u_gBx4gZKF", "6u_gBx4gZKF", "ovVwsDvcg6", "nips_2022_C9yUwd72yy", "nips_2022_C9yUwd72yy", "nips_2022_C9yUwd72yy", "nips_2022_C9yUwd72yy" ]
nips_2022_tbId-oAOZo
QueryPose: Sparse Multi-Person Pose Regression via Spatial-Aware Part-Level Query
We propose a sparse end-to-end multi-person pose regression framework, termed QueryPose, which can directly predict multi-person keypoint sequences from the input image. The existing end-to-end methods rely on dense representations to preserve the spatial detail and structure for precise keypoint localization. However, the dense paradigm introduces complex and redundant post-processes during inference. In our framework, each human instance is encoded by several learnable spatial-aware part-level queries associated with an instance-level query. First, we propose the Spatial Part Embedding Generation Module (SPEGM) that considers the local spatial attention mechanism to generate several spatial-sensitive part embeddings, which contain spatial details and structural information for enhancing the part-level queries. Second, we introduce the Selective Iteration Module (SIM) to adaptively update the sparse part-level queries via the generated spatial-sensitive part embeddings stage-by-stage. Based on the two proposed modules, the part-level queries are able to fully encode the spatial details and structural information for precise keypoint regression. With the bipartite matching, QueryPose avoids the hand-designed post-processes. Without bells and whistles, QueryPose surpasses the existing dense end-to-end methods with 73.6 AP on MS COCO mini-val set and 72.7 AP on CrowdPose test set. Code is available at https://github.com/buptxyb666/QueryPose.
Accept
The authors propose a novel framework for end-to-end multi-person pose estimation by employing a set of learnable part-level queries along with instance-level queries. Promising results are demonstrated on the challenging COCO and CrowdPose datasets. The provided author rebuttal successfully addressed all reviewer concerns. As a result, all four reviewers recommend accepting the paper. The AC has read the paper, reviewer comments, author rebuttal, and all the discussions. The AC agrees with the reviewer recommendations. The authors are encouraged to include the rebuttal results (e.g., runtime analysis) in their camera-ready.
train
[ "AS-SsXbmuMj", "snUv03txNV8", "CUFsG2vv6D", "sQ_XoyN1OhQ", "E5WAAOvoht6", "W3WzaiZnt6_", "GR9_8qWy5gb", "jTh-x0JioAD", "i6tYPys7Rvp", "fQuPBOB5EZ5", "8exCq9F5UO", "9Zm4EN68RJ-", "9ljvollRits", "ymKHlXYAZAg" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the previous insightful comments. We also would like to receive your further response about our clarifications.", " We are glad to hear that the concerns have been addressed. Thanks again for the time and effort in reviewing our paper. The constructive suggestions help us make our paper better.", " Thank you for the previous insightful suggestions. We also would like to hear your further comments. If you have any questions, please let us know.", " Thanks. I have read the responses and all my concerns have been addressed.", " Dear Reviewers,\n\nThis is a friendly reminder that there are only a few days left for the author-reviewer discussion (due August 9th).\n\nPlease properly exploit the chance to engage in the discussion with the authors.\n\nEven a few short sentences to acknowledge that you have read the rebuttals and other reviewer comments are appreciated.\n\nFinally, the area chair thanks the reviewer f8E1 for acknowledging all their comments are addressed by the authors.\n\nThanks,\n", " Dear Reviewers,\n\nFirst of all, thank you for your service for the NeurIPS 2022 review.\n\nThe author-reviewer discussion period has just begun (from August 2nd to August 9th).\n\nPlease kindly take a look at all other reviews and the author responses (if any).\n\nIf you have any concerns that require more clarification from the authors, now it is the time to engage the discussion with the authors.\n\nSince the discussion period is only one week, please properly exploit the chance.\n\nThanks,\n", " Thank you for the careful reviews and constructive suggestions. We will clarify the questions as follows.\n\n**Q1: The proposed method is quite complex. There are several modules involved.**\n\n**A1**: Thanks for your concern. The whole framework can be divided into three parts, including backbone, box decoder and keypoint decoder. The keypoint decoder consists of the proposed Spatial Part Embedding Generation Module(SPEGM) and Selective Information Module(SIM). SPEGM takes ROI feature as inputs, and leverages the local spatial attention to generate spatial-sensitive part embeddings for preserving local spatial information. SIM uses the spatial-sensitive part embeddings to adaptively update the part query. We concentrate on the keypoint decoder and build a sparse end-to-end MPPE pipeline via the sparse part queries.\n\n**Q2: No runtime information is provided. It is hard to tell how fast/slow the method is. Both training and inference times need to be provided in the updated manuscript.**\n\n**A2**: Thanks for your comments. We report the training and inference time of QueryPose with different backbone.\nBackbone | AP| Training Time [h]| Inference Time [ms] \n--|--|--|--\nResNet50| 68.7| 42h| 70\nHRNet-W32|72.4|58h| 100\nHRNet-W48|73.6| 62h| 105\nSwin-L | 73.3| 67h| 117\n\nCompared with the representative dense competitors, we observe QueryPose is training-efficient with fewer training epochs . Furthermore, QueryPose eliminates time-consuming post-processes and achieves the faster speed. 
The detailed comparisons are reported as follows:\n\nMethods | AP| Training Epoch| Training Time [h]| Inference Time [ms] \n--|--|--|--|--\nHrHRNet-W48 [16]| 69.8 |300 | >140h | 317\nSWAHR-W48 [39]|70.8| 300 | >140h| 339\nDEKR-W48 [24] | 71.0|140 | >80h| 284\nAdaptivePose-W48 [20]| 70.0|280 | 140h| 110\nLQR-W48 [40]| 72.4| 170 | 100h| 297\nQueryPose-W48| 73.6| 68 | 62h| 105\n\nWe will provide the training and inference times in the revision.\n\n**Q3: Some minor typos.**\n\n**A3**: Thanks, we revise our paper carefully in the revision.\n\n**Q4: The proposed system is quite complex and hard to reproduce given the information provided in the paper or supplementary material. However sharing the code as the authors intend to do based on the abstract will be sufficient to address this point.**\n\n**A4**: Thanks. We will release the code for the convenience of reproducing our method.", " Thank you for the careful reviews and constructive suggestions. We clarify the questions as follows.\n\n**Q1: The proposed method is not a typical “one-stage” approach. One major drawback of the two-stage top-down approaches is that their computational cost increases with the number of people. But the proposed method (QueryPose) also suffers from such problem.**\n\n**A1**: Thanks for your concern. In this paper, we classify multi-person pose estimation (MPPE) methods from the perspective of two-stage and end-to-end optimization. We argue QueryPose is a sparse end-to-end MPPE framework, which can directly output multi-person keypoint sequences in parallel without non-differentiable post-processes. \n\nTwo-stage top-down approaches use two independent models including a human detector and a single-person pose estimator. The raw image is cropped to several normalized images according to the detected person boxes as input of the single-person pose estimator. The input images increase with the number of people, thus computational costs are increased. QueryPose is a query-based method. As all known, the number of queries is set to a fixed and small number(e.g., 50 or 100) for both training and testing stages. All queries are calculated in parallel in box and keypoint decoders. Therefore, QueryPose does not suffer from such problem. \n\n**Q2: It first locates the bounding box and then estimates the human poses for each bounding box. Isn’t the whole pipeline similar to Mask-RCNN? So I think it is more like a top-down approach, its performance should be compared with other top-down approaches.**\n\n**A2**: To some extent, the pipelines of QueryPose and Mask RCNN are similar. Both of them take the multi-person image as input, utilize an end-to-end network to locate the bounding box and estimate the human pose. However, Mask RCNN leverages the dense anchors and heatmap to perform MPPE, thus we classify it as dense end-to-end method in Table 4.\n\nIn contrast to two-stage top-down approaches using two independent models to perform MPPE, QueryPose leverages an end-to-end network to directly predicts multi-person keypoint sequences from the raw image via sparse learnable queries. We list the comprehensive comparisons with existing dense/sparse end-to-end methods in Table 4. Furthermore, we also compared it with the two-stage methods in Tables 5 and 6. 
QueryPose not only outperforms all existing end-to-end methods but narrows the gap with the two-stage methods.\n\n**Q3: The total computational complexity and runtime performance are concerning, but not reported in the paper**\n\n**A3**: We measure the inference time and FLOPs of QueryPose with ResNet50, HRNet-W32, HRNet-W48 and Swin-L. \n| Backbone | AP | Time [ms] | FLOPs | \n|-------- |-------- |------ |------- |\n|ResNet50 | 68.7 | 70 | 153G | \n|HRNet-W32 | 72.4 | 100 | 237G | \n|HRNet-W48 | 73.6 | 105 | 334G | \n|Swin-L | 73.3 | 117 | 545G | \n\nCompared with the state-of-the-art dense end-to-end methods, QueryPose eliminates time-consuming post-processes and achieves the faster speed. The inference time of QueryPose-R50 is 70ms with 153G FLOPs, which is faster than Mask RCNN-R50 (114ms) with 238G FlOPs. With the same backbone, QueryPose outperforms the representative DEKR, HrHRNet, SWAHR, and LQR in terms of accuracy and speed. More comparisons of runtime performance are reported in the revision.\n| Methods | Backbone | AP| Time [ms] | \n|---|-- |--|---|\n|HrHRNet [16] | HRNet-W48 |69.8 |317 | \n| SWAHR [39] | HRNet-W48| 70.8 | 339 | \n|DEKR [24] | HRNet-W48|71.0 | 284| \n|LQR [40] | HRNet-W48|72.4| 297 | \n| QueryPose | HRNet-W48 | 73.6 | 105 | \n\n**Q4: The paper writing can be improved. Especially, Figure2 is not clear.**\n\n\n**A4**: Thanks, we will re-organize our paper writing and improve Figure 2 for clearer presentation.", " Thank you for the careful reviews and constructive suggestions. We clarify the questions as follows.\n\n**Q1:This method seems as though it must be fairly expensive, the authors report splitting a batchsize of 16 across 8 A100s, it would be helpful to see comparisons in terms of memory use and training and inference time compared to other methods.**\n\n**A1**: Thanks for your concern. Following the previous query-based methods (e.g., DETR, Deformable-DETR, and Sparse RCNN), the batch size of QueryPose is set to 16 without adjustment. Batch size 16 is enough to obtain superior results, while the state-of-the-art dense competitors (e.g., DEKR, SWAHR, AdaptivePose, LQR) often set batch size as 64 or even larger. For memory use, QueryPose costs 14G GPU memory with ResNet50 and 23G with HRNet-W48. The above dense competitors cost over 30G GPU memory with HRNet-W48. We observe that the main cost is caused by dynamic MLP (a single dynamic MLP contains 11.3M parameters). The proposed SPEGM and SIM only contains 1.1M and 0.62M parameters. This phenomenon inspires us to reduce the complexity of dynamic MLP module later. \n\nWe measure the training and inference times of QueryPose with different backbones.\nBackbone | AP| Training Time [h]| Inference Time [ms] \n--|--|--|--\nResNet50| 68.7| 42| 70\nHRNet-W32|72.4|58| 100\nHRNet-W48|73.6| 62| 105\nSwin-L | 73.3| 67| 117\n\nCompared with the representative dense competitors, we observe QueryPose is training-efficient with fewer training epochs. Furthermore, QueryPose eliminates time-consuming post-processes and achieves the faster inference speed. The training and inference times are measured on the same device if possible. 
The detailed comparisons are reported as follows:\nMethods | AP| Training Epoch| Training Time [h]| Inference Time [ms] \n--|--|--|--|--\nHrHRNet-W48 [16]| 69.8 |300 | >140h | 317\nSWAHR-W48 [39]|70.8| 300 | >140h| 319\nDEKR-W48 [24]| 71.0|140 | >80h| 284\nAdaptivePose-W48 [20]| 70.0|280 | 140h| 110\nLQR-W48 [40]| 72.4| 170 | 100h| 297\nQueryPose-W48| 73.6| 68 | 62h| 105\nWe also provide runtime performance in the revision.\n\n**Q2: While many ablations are provided to show the benefits of the proposed approach, it is worth pointing out that some simpler baselines are close in their performance (e.g. a simple summation instead of the SIM (Table 2a), or a dynamic MLP instead of the SPEGM (Table 1). I don’t know if the most compelling case has been made yet that the full pipeline in all its complexity is necessary. It feels as though a simpler set of layers and update rule could replace the SPEGM and SIM achieving the same level of performance. This criticism is perhaps a bit unfair, as the authors do clearly show that performance would be slightly worse with the simpler alternatives.**\n\n**A2**: Thanks. As reported in Table 1, first, we verify the superiority of part query over instance query across different feature interaction methods. Based on part query, we further validate the effectiveness of different feature interaction methods. Compared with dynamic MLP module with 11.3M parameters, we argue that SPEGM is more effective and efficient with only 1.1M parameters. It also reduces the training time and memory cost significantly with better performance. Furthermore, based on SPEGM, we verified different iteration schemes as shown in Table 2(a), SIM can improve the non-iteration and simple summation by 1.1 and 0.7 AP with 0.62M parameters, which proves SIM can adaptively boost the informative features and filter the noises with slight computational cost. Therefore, the proposed SPEGM and SIM are simple and efficient than other alternatives.\n\n**Q3: The communication and organization of the paper could be improved. The method itself is interesting, but it can be challenging to work through some of the unwieldy names and acronyms. Also, i found the overview figure (Fig. 2) difficult to make sense of.**\n\n**A3**: Thanks for your valuable comments. We explicitly define and explain the unwieldy names and acronyms in the revision. Moreover, we will augment the communication and re-organize the paper writing for clearer understanding. Figure 2 will be further improved for more intuitive illustration.\n\n**Q4: For the SIM, instead of the proposed weighted sum were any standard recurrent updates considered (e.g. a GRU update)?**\n\n**A4**: Thanks for your comments. We consider that the core insight of the proposed SIM is similar to GRU. Both of them leverage the update gate and reset gate to enhance the informative feature and filter the noise. SIM can be regard a simplified version of gated recurrent unit. We also attempt to use more complex structures in SIM, but it does not bring additional improvements.", " \n**Q1: The authors suggest in the abstract and introduction of the paper that one of the main benefits of the proposed method is to reduce the complexity of the inference. However, there is no evidence to support this point. Detailed comparisons of runtime performance such as FLOPs and inference time should be reported.**\n\n**A1**: Thanks for your valuable comments. 
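Since A4 above likens the SIM update to a simplified gated recurrent unit, the sketch below shows what a gated, weighted-sum style refresh of a part query by a part embedding could look like. The single linear gate, module name, and tensor shapes are assumptions for illustration only and do not reproduce the paper's module.

```python
import torch
import torch.nn as nn

class GatedQueryUpdate(nn.Module):
    """Illustrative gated update in the spirit of the SIM description above:
    a learned gate boosts informative embedding features and damps noisy ones."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, q, e):
        # q: part queries, e: spatial-sensitive part embeddings, both (instances, parts, dim)
        g = torch.sigmoid(self.gate(torch.cat([q, e], dim=-1)))   # selection weights in [0, 1]
        return g * e + (1.0 - g) * q                               # weighted sum instead of plain addition

q = torch.randn(2, 17, 256)
e = torch.randn(2, 17, 256)
print(GatedQueryUpdate(256)(q, e).shape)           # torch.Size([2, 17, 256])
```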
The existing dense end-to-end methods require complex hand-crafted post-processing (e.g., NMS, grouping or refinement) to obtain the final results. We observe that this post-processing often consumes more time than the network forward pass in existing methods. QueryPose eliminates the time-consuming post-processing and achieves a compact inference pipeline.\n\nWe report the accuracy, inference time and FLOPs of QueryPose with ResNet50, HRNet-W32, HRNet-W48 and Swin-L. \n| Backbone| AP | Time [ms] | FLOPs | \n|---- |----- |--- |- |\n|ResNet50 | 68.7 | 70 | 153G | \n|HRNet-W32 | 72.4 | 100 | 237G | \n|HRNet-W48 | 73.6 | 105 | 334G | \n|Swin-L |73.3 |117 | 545G | \n\nCompared with the dense end-to-end methods, QueryPose achieves a faster inference speed. For example, the inference time of QueryPose-R50 is 70ms with 153G FLOPs, which is faster than Mask RCNN-R50 (114ms) with 238G FLOPs. We further compare inference time with the most competitive dense end-to-end methods. With the same backbone, QueryPose outperforms HrHRNet, SWAHR, DEKR, and LQR in terms of accuracy and speed. The detailed comparisons of runtime performance are reported in the revision.\n| Methods | Backbone | AP| Time [ms] | \n|---|-- |--|---|\n|HrHRNet [16] | HRNet-W48 |69.8 |317 | \n| SWAHR [39] | HRNet-W48| 70.8 | 339 | \n|DEKR [24] | HRNet-W48|71.0 | 284| \n|LQR [40] | HRNet-W48|72.4| 297 | \n| QueryPose| HRNet-W48 |73.6 | 105 | \n\n**Q2: The box detection pipeline can be replaced by a DETR-like architecture using object queries to further reduce the complexity of the model. I wonder if the authors have considered this design. If so, what is the performance, and why is the current FPN-like architecture preferred?**\n\n**A2**: Thanks for your insightful comments. Indeed, the box decoder can be replaced by any DETR-like architecture. However, in the algorithm design process, we focused only on the keypoint decoder and aimed to build a sparse end-to-end MPPE pipeline. Compared with DETR-like methods, Sparse RCNN avoids the cascade encoder and only interacts with the features of local ROIs, and thus seems simpler in terms of architecture. However, we observe that a single dynamic MLP module used in Sparse RCNN is computationally expensive, with 11.3M parameters. We will consider designing a lighter module to replace the dynamic MLP or adopting a DETR-like architecture to reduce the network complexity in future work. We expect that a DETR-like architecture could achieve comparable performance.\n\n**Q3: It is interesting to see in the supplementary that the proposed method outperforms two-stage models on CrowdPose by a quite notable margin. I’d like to see a more detailed justification/discussion on the results.**\n\n**A3**: Thanks. Two-stage top-down approaches use two independent models, including a human detector and a single-person pose estimator. The reasons why the proposed method outperforms two-stage models on CrowdPose can be summarized in two aspects:\n+ The reported two-stage methods generally use a dense human detector, which requires NMS to suppress duplicates. However, in crowded scenes, heavily overlapped instances may be removed by NMS, leading to inferior results. This phenomenon can also be observed in Table 11 of Sparse RCNN [27]. The sparse paradigm may be more suitable for crowded scenes. \n+ The single-person pose estimator uses heatmaps to represent each keypoint position independently, losing the relations between keypoints. 
In crowded scenes, the cropped single-person image often contains the limbs of other people. Learning keypoint associations can avoid this confusion. \n\nQueryPose leverages the sparse paradigm to solve MPPE and employs multi-head self-attention to capture the relationships across different part queries, and is thus more robust in crowded scenes. We also provide this discussion in the Appendix.\n\n**Q4: Some comments to improve the clarity of the paper: 1. In Figure 2, it is unclear how the results from the previous stage are used as the input of the current stage. 2. The results on CrowdPose should be put into the main paper.**\n\n**A4**: Thanks for your valuable comments. We use the predicted box of the previous stage as the proposal box of the current stage. The keypoint coordinates are predicted independently for each stage. We will further improve Figure 2 to illustrate this explicitly in the revision. Moreover, the results on CrowdPose will be moved into the main paper for a more comprehensive presentation. ", " This paper proposes an end-to-end multi-person pose regression framework to address the high complexity and redundant post-processing of previous work. To achieve this, the authors introduce the Spatial Part Embedding Generation Module (SPEGM) and the Selective Iteration Module (SIM) to encode human instances by a set of learnable part-level queries together with instance-level queries. Different from previous works, the part-level queries can capture the spatial details and structural information of human poses. Experimental results show that the proposed method achieves state-of-the-art performance on the MS COCO and CrowdPose datasets. __Strengths__\n\n1. This paper is overall well-structured and easy to follow.\n\n2. I think the problem this paper tackles is important for the community. Multi-person pose estimation is an important problem, as collecting ground truth for in-the-wild images requires prohibitively expensive human effort.\n\n3. The proposed method is technically sound and novel.\n\n4. The results show that the performance of the proposed method is strong and even on par with two-stage methods.\n\n__Weaknesses__\n\n1. The authors suggest in the abstract and introduction of the paper that one of the main benefits of the proposed method is to reduce the complexity of inference. However, there is no evidence to support this point. Detailed comparisons of runtime performance such as FLOPs and inference time should be reported.\n\n2. The box detection pipeline can be replaced by a DETR-like architecture using object queries to further reduce the complexity of the model. I wonder if the authors have considered this design. If so, what is the performance, and why is the current FPN-like architecture preferred?\n\n3. It is interesting to see in the supplementary that the proposed method outperforms two-stage models on CrowdPose by a quite notable margin. I’d like to see a more detailed justification/discussion on the results.\n\n4. Some comments to improve the clarity of the paper: 1. In Figure 2, it is unclear how the results from the previous stage are used as the input of the current stage. 2. The results on CrowdPose are important and should be put into the main paper.\n See \"Weaknesses\". Yes.", " The authors propose a method for multi-person pose estimation that builds on Sparse R-CNN with additional functionality to achieve accurate pose estimates. 
In Sparse R-CNN, there are reference boxes/queries that are iteratively updated to converge on a final set of detections. In this work, there are additionally query embeddings (Q_p) that are decoded to produce keypoint estimates. The key contributions of this work are the “Spatial Part Embedding Generation Module” (SPEGM) and the “Selective Iteration Module” (SIM) which are used to update Q_p.\n\n- SPEGM: Given a bounding box for a detected person, produce ROI aligned features from the box and use convolutions to predict a set of attention maps (each map corresponding to different body parts). Use these attention maps to perform a weighted pooling of the ROI aligned features to yield a set of embeddings E_p.\n- SIM: Given the current embeddings, predict weights to perform a weighted sum of Q_p and E_p providing a new Q_p which is then decoded for the current pose estimate.\n\nThis process is repeated multiple times as the bounding boxes and part queries are iteratively improved. When benchmarking on COCO this method outperforms prior end-to-end methods.\n\n Strengths:\n\n- There are a number of different pieces that come together in the proposed method, and the authors provide ablations showing that they all play a role (using part queries rather than predicting keypoints directly from the instance-level features, the SPEGM, the weighted sum for the SIM)\n- The results are strong, outperforming prior work on very competitive benchmarks notably without using test-time augmentation (TTA)\n\nWeaknesses:\n\n- This method seems as though it must be fairly expensive, the authors report splitting a batchsize of 16 across 8 A100s, it would be helpful to see comparisons in terms of memory use and training and inference time compared to other methods.\n- While many ablations are provided to show the benefits of the proposed approach, it is worth pointing out that some simpler baselines are close in their performance (e.g. a simple summation instead of the SIM (Table 2a), or a dynamic MLP instead of the SPEGM (Table 1)). I don’t know if the most compelling case has been made yet that the full pipeline in all its complexity is necessary. It feels as though a simpler set of layers and update rule could replace the SPEGM and SIM achieving the same level of performance. This criticism is perhaps a bit unfair, as the authors do clearly show that performance would be slightly worse with the simpler alternatives.\n- The communication and organization of the paper could be improved. The method itself is interesting, but it can be challenging to work through some of the unwieldy names and acronyms. Also, I found the overview figure (Fig. 2) difficult to make sense of.\n\nOverall:\n\nAltogether I think the method is interesting enough and the benchmark results speak for themselves to lean towards acceptance, but the quality and organization of writing could be improved to better communicate the proposed ideas and justify the full pipeline.\n\nPost rebuttal: Having read the other reviews and author response I lean towards accepting and raise my score. - For the SIM, instead of the proposed weighted sum were any standard recurrent updates considered (e.g. a GRU update)? no discussion included by authors", " The paper presents a query-based human pose estimation approach. It builds upon Sparse-RCNN and design Spatial Part Embedding Generation Module (SPEGM) and Selective Iteration Module (SIM) to further improve the performance. 
Experiments on COCO and CrowdPose validate the effectiveness of the proposed method.\n Pros:\n1.\tThe proposed method is interesting. It adopts the query-based detection framework for one-stage pose estimation and designs heads and interactions between the instance query and the pose query. \n2.\tIt achieves state-of-the-art performance among the one-stage approaches.\n\nCons:\n1.\tTwo-stage vs one-stage, computational complexity\na)\tFrom my point of view, the proposed method is not a typical “one-stage” approach. One major drawback of the two-stage top-down approaches is that their computational cost increases with the number of people (this is also pointed out by the authors in the Introduction). But the proposed method (QueryPose) also suffers from this problem. It first locates the bounding box and then estimates the human poses for each bounding box. Isn’t the whole pipeline similar to Mask-RCNN? So I think it is more like a top-down approach, and its performance should be compared with other top-down approaches.\nb)\tThe total computational complexity and runtime performance are concerning, but not reported in the paper. \n2.\tThe paper writing can be improved. In particular, Figure 2 is not clear. \n See above. Yes", " The paper proposes a new method for multi-person pose estimation in an end-to-end learnable scheme. It brings together ideas from Mask-RCNN/Roi-Align and Transformers/Attention in a cascade scheme in which the box and keypoint predictions are refined in several stages. The paper contains ablation studies which examine the effectiveness of the different modules and reports strong results on the COCO keypoints and CrowdPose pose estimation benchmarks. Strengths:\n\n+ Good experimental results on well-known competitive benchmarks.\n+ Although the method is complex, the paper does a good job explaining what each of the several modules does. Figure 2 is particularly helpful.\n+ The authors promise to share code. This is very helpful because the method is quite complex and it would be very hard to reproduce the reported results using the description provided in the paper.\n\nWeaknesses:\n- The proposed method is quite complex. There are several modules involved.\n- No runtime information is provided. It is hard to tell how fast or slow the method is. Both training and inference times need to be provided in the updated manuscript.\n\nSome minor typos:\n\nTypo in Fig 2(e), Normalizing\nLine 166: Multilayer Perceptron I think the key missing information is how fast or slow the proposed method is at training and inference time.\n\nThe proposed system is quite complex and hard to reproduce given the information provided in the paper or supplementary material. However, sharing the code, as the authors intend to do based on the abstract, will be sufficient to address this point. Not very much applicable to this paper." ]
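The SIM update debated in the exchange above (a learned weighted sum versus a full GRU-style recurrent update) reduces to a gated interpolation between the previous part queries and the newly pooled part embeddings. The sketch below is an illustration only, not the authors' released code; the tensor shapes, layer names, and gate parameterization are all assumptions.

```python
import torch
import torch.nn as nn

class GatedQueryUpdate(nn.Module):
    """Illustrative gated update: blend previous part queries Q_p with
    newly pooled part embeddings E_p using a learned per-channel gate."""

    def __init__(self, dim: int):
        super().__init__()
        # The gate is predicted from the concatenation of old query and new embedding.
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, q_prev: torch.Tensor, e_new: torch.Tensor) -> torch.Tensor:
        # q_prev, e_new: (batch, num_parts, dim)
        g = torch.sigmoid(self.gate(torch.cat([q_prev, e_new], dim=-1)))
        # Weighted sum: g decides how much of the new evidence to keep.
        return g * e_new + (1.0 - g) * q_prev

# Example usage with made-up sizes.
update = GatedQueryUpdate(dim=256)
q = torch.randn(2, 17, 256)   # previous part queries
e = torch.randn(2, 17, 256)   # pooled part embeddings from the current stage
q_next = update(q, e)         # refined part queries for the next stage
```

A full GRU would add a reset gate and a candidate state on top of this interpolation, which is consistent with the authors' remark that SIM behaves like a simplified gated recurrent unit.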
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "GR9_8qWy5gb", "sQ_XoyN1OhQ", "i6tYPys7Rvp", "jTh-x0JioAD", "W3WzaiZnt6_", "nips_2022_tbId-oAOZo", "ymKHlXYAZAg", "9ljvollRits", "9Zm4EN68RJ-", "8exCq9F5UO", "nips_2022_tbId-oAOZo", "nips_2022_tbId-oAOZo", "nips_2022_tbId-oAOZo", "nips_2022_tbId-oAOZo" ]
nips_2022_OHkq7qNr72-
A Mixture Of Surprises for Unsupervised Reinforcement Learning
Unsupervised reinforcement learning aims at learning a generalist policy in a reward-free manner for fast adaptation to downstream tasks. Most of the existing methods propose to provide an intrinsic reward based on surprise. Maximizing or minimizing surprise drives the agent to either explore or gain control over its environment. However, both strategies rely on a strong assumption: the entropy of the environment's dynamics is either high or low. This assumption may not always hold in real-world scenarios, where the entropy of the environment's dynamics may be unknown. Hence, choosing between the two objectives is a dilemma. We propose a novel yet simple mixture of policies to address this concern, allowing us to optimize an objective that simultaneously maximizes and minimizes the surprise. Concretely, we train one mixture component whose objective is to maximize the surprise and another whose objective is to minimize the surprise. Hence, our method does not make assumptions about the entropy of the environment's dynamics. We call our method a $\textbf{M}\text{ixture }\textbf{O}\text{f }\textbf{S}\text{urprise}\textbf{S}$ (MOSS) for unsupervised reinforcement learning. Experimental results show that our simple method achieves state-of-the-art performance on the URLB benchmark, outperforming previous pure surprise maximization-based objectives. Our code is available at: https://github.com/LeapLabTHU/MOSS.
Accept
This paper proposes a method for unsupervised skill discovery, which learns a mixture of policies that simultaneously maximizes and minimizes the surprise. All the reviewers agree that the paper tackles an important active research area. The paper is well written; the motivation is well explained; the proposed method is simple and easy-to-implement; and it performs well on the benchmark. Several concerns were raised by the reviewers, including novelty, delta beyond the interpolated CIC, and the choice of M. The rebuttal and the additional experimental results have addressed some of these concerns. One of the main criticisms, choice on when to switch the objectives, still remains after the rebuttal. Although the current heuristic is simple, it seems ad-hoc, without good justifications. After discussion with reviewers, we agree that this limitation is compensated by the other contributions of the paper, and helps to set the stage for future work that does optimize M or devises a new method that does not rely on it. Thus, we recommend accepting this paper.
train
[ "WcEprqb2qF", "D1ZKeQ9NO0P", "cvVk_kBg-So", "isIvJmqOCHv", "LHDNuxz1ecB", "JUkXFvOoPK", "hepydie_US_", "QdyLeY1rsdm", "hdFH16srl7z", "DC7do0cQ9_C", "QhvG92gH0zP", "AXEBllPVy8X", "-mPx8SsbOag" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your responses. They comprehensively addressed my concerns.", " We appreciate the follow-up response from the reviewer. Although our objective switching mechanism appears to be ad-hoc, we consider its simplicity a strength for the following reasons: it does not increase the computational cost during training, does not require hyper-parameter tuning, and performs well. We agree that more sophisticated methods might perform better. However, we are not sure that the performance gain outweighs the possible increase in complexity (e.g., computational cost or hyper-parameter tuning.) We welcome any additional suggestions from the reviewer. Thanks!", " Thank you for the response, I think these changes really improve the paper. However, since most of my comments/concerns that can addressed in the rebuttal period were relatively minor, and my major concern around how to switch objectives is something that can only addressed in future work, I plan to keep my original rating. ", " Thank you to everyone involved in this reviewing process. We appreciate all four reviewers' insightful comments on our submission. We have carefully addressed all reviewers' comments and responded back to each reviewer separately. We also uploaded a revised submission that incorporated all reviewers' suggestions. To ease the reviewing process, we summarize the major changes below. We welcome further comments and suggestions that improve the quality of our submission. Thank you!\n- We added MOSS results in Fig 4b.\n- We modified Section 4 (L174) to clarify :\n - the novelty of our method in comparison to previous work.\n - the distinction between MOSS objective and a scaled CIC.\n- We added ablations on the prior distribution of $M$ in Appendix C1. \n- We added results for Adversarial Surprise to the main results (Fig 2 and Table 1). \n- We moved the related work that was in the discussion section to Section 4.\n- We moved the benchmark render figures to the appendix (Figure 4).\n- In Section 5, we removed a confusing sentence.\n- We revised and updated the discussion & limitation section to deepen the discussion about objective switching. \n- We revised and updated the paper with reference to URLB to clarify the definitions of \"Competence\"-, \"Data\"-, and \"Knowledge\"-based methods.\n- We revised and updated the paper to fix the characterization of the objective function.\n- We revised and updated the paper to fix the ambiguity between the objective function and the optimization procedure.\n- We revised and updated the paper to include the zero-shot results of MOSS along with the fine-tuning curves in Appendix C.\n- We revised and updated the paper to include ablations on the skill vector in Appendix C2.\n- We revised and updated the paper to clarify the statement in L170 of the previous revision.\n\nFinally, for citations that appear in the rebuttal, numerical citations match the reference numbers in the pdf with the appendix; alphabetical citations are newly added references for each reply and are listed at the end of the reply.\n\n", " We want to thank the reviewer for the very detailed suggestions.\n\n\n## Response:\n\n**\"There is some undefined terminology. \"Competence\"-, \"Data\"-, and \"Knowledge\"-based methods.\"** \\\nThanks for the suggestions. We added a footnote to refer readers to the URLB paper for their definitions.\n\n**\"Can you fix the characterization of the objective function?\"** \\\nWe used the term \"state-transition\" to refer to $p(s^{\\prime},s)$. 
We agree with the reviewer that this might be misleading as the term \"state-transition\" usually refers to $p(s^{\\prime}\\mid s)$. We updated the paper accordingly where we now refer to $p(s^{\\prime},s)$ as the joint distribution of state-transitions and $p(s^{\\prime}\\mid s)$ as state-transition.\n\n**\"Can you fix the ambiguity in the connection between the objective function and the optimization procedure?\"** \\\nYes, Alg 1 is correct. Our agent uses Eq. 9 (i.e., the KNN estimator) as the only intrinsic reward during pretraining. We clarified the ambiguity in footnote of page 6. In short, the implementation provided by CIC (https://github.com/rll-research/cic) is based on the arxiv version (https://arxiv.org/abs/2202.00161v2) which is an updated version of the ICLR version (https://openreview.net/forum?id=kOtkgUGAVTX). In particular, in the arxiv version, the intrinsic reward only relies on the KNN joint entropy estimates and NCE is only used to train $\\phi_s$, $\\phi_z$, and $\\psi$. While the ICLR version (page 5) includes the NCE term in the intrinsic reward.\n\n**\"The experiments do not adequately illustrate the effect that finetuning has on the pretrained policy.\"** \\\nThanks for the suggestion. We added zero-shot results and finetuning learning curves in the appendix. In particular, Quadruped is an example of domain where the policy does not benefit significantly from finetuning (compared to Walker and Jaco). We note that this trend is not specific to MOSS, i.e., we observe a similar trend for CIC and Adversarial Surprise [18]. We present the zero-shot results of MOSS_min and MOSS_max on Quadruped in the table below\n\n| | Quadruped Jump | Quadruped run | Quadruped Stand | Quadruped Walk |\n| --------------- | -------------- | ------------- | --------------- | -------------- |\n| MOSS_min,frozen | 85.3±9.5 | 45±4 | 110±13 | 43±4 |\n| MOSS_max,frozen | 627±28 | 370±16 | 767±35 | 306±12 |\n| MOSS Finetuned | 674±11 | 485±6 | 911±11 | 635±36 |\n\n\n**\"The current ablation does not illustrate the importance of the choice of the structure of the $Z$ priors, which includes both the dimensionality $d$, as well as the event space.\"** \\\nWe agree with the reviewer that the configuration of the skill vector may depend on the environment and downstream task. \nWe added the ablation suggested by the reviewer in Appendix C2. Below, we ran experiments for two domains: Quadruped and Walker and provide a table of the results. \n||Continuous 1|Continuous 16|Continuous 64 (MOSS)|Discrete 1|\n|:-|:-|:-|:-|:-|\n|Walker Flip|827±50|934±12|729±40|672±9|\n|Walker Run|329±41|245±44|531±20|335±20|\n|Walker Stand|930±7|830±25|962±3|931±8|\n|Walker Walk|954±4|960±2|942±5|751±70|\n|Quadruped Jump|193±27|649±25|674±11|194±50|\n|Quadruped Run|167±17|444±26|485±6|162±20|\n|Quadruped Stand|306±32|834±33|911±11|267±38|\n|Quadruped Walk|158±19|569±49|635±36|217±23|\n\nWe find that a continuous skill performs better than a discrete skill but not by a large margin. However, it seems that in general, a higher dimensional skill performs better, especially in Quadruped which has a higher dimensional state and action space than the Walker domain. Our results confirm the reviewer's intuition and suggest that the optimal skill vector is task-dependent.\nWe note that hyper-parameters in MOSS (e.g., skill dimension) are not tuned and simply taken from CIC for fair comparison.\n", " **\"Can you include an empirical comparison to [18]?\"** \\\nThanks for this suggestion. 
We added a comparison with Adversarial Surprise (AS) [18] to Section 5 of the paper. We also list the numerical results in the table below:\n||Walker Flip|Walker Run|Walker Stand|Walker Walk|Quadruped Jump|Quadruped Run|Quadruped Stand|Quadruped Walk|Jaco Bottom Left|Jaco Bottom Right|Jaco Top Left|Jaco Top Right|\n|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|\n|MOSS|729±40|531±20|962±3|942±5|674±11|485±6|911±11|635±36|151±5|150±5|150±5|150±6|\n|AS-ALICE-4 MIL|491±20|211±9|868±47|655±36|415±20|296±18|590±41|337±17|109±20|141±19|140±17|129±15|\n|AS-ALICE-2MIL |469±13|195±14|914±17|655±33|380±18|279±9|543±21|276±19|114±20|158±15|143±13|120±20|\n|AS-BOB-4MIL|475±16|247±23|917±36|675±21|449±24|285±23|594±37|353±39|116±21|166±12|143±12|139±13|\n|AS-BOB-2MIL|474±23|223±15|937±15|687±42|427±20|262±8|590±22|314±23|118±21|168±15|154±13|121±14|\n\nSince Alice and Bob do not share parameters, we show finetuning results pretrained using 4 million environment steps, splitting 2 million to each Alice and Bob. We notice a slight increase in performance with longer pretraining for AS. Results show that AS is a competitive approach in the URL setting. In addition, AS obtained a noticeably high performance in the Jaco domain. But, overall, MOSS outperforms AS by a significant margin.\n\n**\"L170: \"Intuitively, the NCE loss [ 23 ] pushes $p(s^{\\prime} \\mid s, z)$ to be a delta-like density function. However, since we define behaviors with $n$ steps, we want a wider distribution\" The second sentence does not follow from the first. Even if $p(s^{\\prime} \\mid s, z)$ is deterministic, it can still be applied for $N$ steps, to yield an $N$-step behavior. Can the paper clarify these statements?\"** \\\nWe updated the paper to clarify this sentence. In short, NCE involves a temperature term that controls the weight given to negative samples. A low-temperature term pushes the training dynamics to assign a lot of densities to a given positive pair.\n\nSince, there is not a single positive pair, but $n$ positive pairs (a skill is defined by $n$ steps), we need a high-temperature term (e.g., 0.5) so that a single positive pair is not assigned too much density. \n\n**\"Can you explain the following inconsistency? L260 claims that setting Z_min = Z_max is equivalent to CIC, yet this results in different empircal performances in Fig 4b. By L260, we should expect their results to be equivalent. Why are they not? Or perhaps it's not equivalent to CIC, because it involves flipping the sign of the intrinsic reward function?\"** \\\nYes you are right, it's not equivalent to CIC because it involves flipping the sign of the intrinsic reward function.\n\n**\"By the paper's definition of surprise as the entropy of the joint distribution of state transitions, the method is not optimizing only surprise (since it's just one of the two terms of the mutual information). The method name then becomes something of a misnomer...\"** \\\nWe agree with the reviewer that the optimization involves both terms. 
However, we named our method using the task of the RL agent, which is to maximize intrinsic reward - in our case, only involving surprise.\n\n## Summary:\n- We revised and updated the paper with reference to URLB for the definitions of \"Competence\"-, \"Data\"-, and \"Knowledge\"-based methods.\n- We revised and updated the paper to fix the characterization of the objective function.\n- We revised and updated the paper to fix the ambiguity between the objective function and the optimization procedure.\n- We revised and updated the paper to include the zero-shot results of MOSS along with the fine-tuning curves in Appendix C3.\n- We revised and updated the paper to include ablations on the skill vector in Appendix C2.\n- We revised and updated the paper to include results from Adversarial Surprise to the main results.\n- We added MOSS results into Fig 4b.\n- We revised and updated the paper to clarify the statement in L170 ", " We want to thank the reviewer for the detailed and insightful review.\n\n\n## Response:\n\n**\"Some of the related work that also proposes to maximize and minimize surprise was only mentioned in the last section. I think this should be moved up to be in Section 3 instead, where other related works are discussed.\"** \\\nWe followed the reviewer's suggestion and updated the paper accordingly. Because Section 3's title focuses on competence-based methods and the related work in the last section is not on competence-based methods, we moved this part into Section 4 to better clarify our novelty compared with other approaches.\n\n**\"In the Section 5, you say “but performed well on the standing task, which is inherently a low entropy task.” → did you mean to say high-entropy task instead?\"** \\\nAt first, we did mean that the standing task is inherently a low entropy task. For this task, after the agent gets upright in a short amount of time, the agent does not move much for the remaining of the episode, which means overall, the joint $p(s,s^{\\prime})$ has low entropy. \n\nIn this revision, we agree that this sentence was a bit confusing and not precise. Therefore, we removed this remark from the final version of the paper.\n\n**\"Given that the paper builds very closely on CIC, a more detailed exposition on CIC and limited discussion of other techniques would have sufficed.\"** \\\nWe agree with the reviewer. In fact, this revision describes the objective function of CIC in Section 3.2. And, for clarity, Section 4 only contains content related to our contribution, while Sections 3 & 2 introduce the necessary background upon which MOSS builds. Furthermore, we believe that the brevity of Sec 4 is representative of the simplicity of our method, which we consider a strength.\n\n**\"The choice of when to switch between a surprise minimizing and maximizing objectives is somewhat ad-hoc.\" AND \"I don't think the limitations are particularly well-addressed in the paper. ... I think a more detailed discussion on how to make the objective switching more adaptive would make the limitations section much stronger.\"** \\\nWe agree with the reviewer's opinion on this issue and have updated the limitation section accordingly. Compared to more sophisticated methods (e.g., Appendix B) for setting $M$ (i.e., switching the objective), our heuristic does not increase the computational cost during training, does not require hyper-parameter tuning, and performs well.\n\nIn our revised and updated paper, we proposed a meta controller [28] as possible future work. 
In particular, the meta controller would be trained to identify the optimal lower-level skill to perform, the steps each skill should take, and the objective to focus on. This meta controller will act according to higher-level abstractions and possibly incentivize structured deep exploration [43].\n\n## Summary:\n- We moved the related work that was in the discussion section to Section 4.\n- We removed a confusing sentence from Section 5.\n- We updated the discussion & limitation section to deepen the discussion about objective switching.", " We want to thank the reviewer for the insightful feedback.\n\n## Response\n**\"Already exist methods which try to maximize and minimize surprise at the same time. Thus, it is difficult to find novelty in the proposed method.\" AND \"The authors should try to motivate the novelty of the paper a bit more.\"** \\\nThanks for the suggestion. We have updated Section 4 accordingly to better clarify our novelty. The challenge is how to optimize these two opposite objectives. Thus, we consider that our novelty lies in the simplicity of how we maximize and minimize surprise simultaneously. In particular,\n- Prior work relies on an adversarial game [18] known to be challenging to train [37]. Instead, we propose a simple mixture of policies. Our implementation only requires a minor change to CIC.\n- Prior work builds upon an on-policy algorithm that requires training two policies (Alice and Bob) [18] that do not share parameters. Instead, our method builds upon an off-policy algorithm that uses a single network for both objectives, resulting in higher sample efficiency [b].\n\nIn the table below, we report the experimental results of AS [18] to compare performance with MOSS on URLB and update the paper's main results (Figure 2 and Table 1).\n\n||Walker Flip|Walker Run|Walker Stand|Walker Walk|Quadruped Jump|Quadruped Run|Quadruped Stand|Quadruped Walk|Jaco Bottom Left|Jaco Bottom Right|Jaco Top Left|Jaco Top Right|\n|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|\n|MOSS|729±40|531±20|962±3|942±5|674±11|485±6|911±11|635±36|151±5|150±5|150±5|150±6|\n|AS-ALICE-4 MIL|491±20|211±9|868±47|655±36|415±20|296±18|590±41|337±17|109±20|141±19|140±17|129±15|\n|AS-ALICE-2MIL |469±13|195±14|914±17|655±33|380±18|279±9|543±21|276±19|114±20|158±15|143±13|120±20|\n|AS-BOB-4MIL|475±16|247±23|917±36|675±21|449±24|285±23|594±37|353±39|116±21|166±12|143±12|139±13|\n|AS-BOB-2MIL|474±23|223±15|937±15|687±42|427±20|262±8|590±22|314±23|118±21|168±15|154±13|121±14|\n\nWe include the results for Alice and Bob since the original paper did not show how to choose the policy to finetune for downstream tasks. Additionally, since Alice and Bob are parameter-independent policies, we include the results for both pretrained using 2 mil and 4mil environment steps for easy comparison. From these experimental results, MOSS outperforms AS in URLB.\n\n**Can this method also be used with demonstrations?** \\\nThanks for the suggestion. Yes, we believe our method can be used with demonstrations. Unsupervised reinforcement learning methods (e.g., MOSS) are generally used to pretrain a policy (and critic) for faster adaption to downstream tasks [31]. If we already have demonstrations, the downstream task is probably known or can be inferred using inverse reinforcement learning methods.\nUnfortunately, we did not find a method that uses unsupervised reinforcement learning pretraining and then finetuned to demonstration data. 
Therefore, we did a proof-of-concept experiment by running the following demonstration experiment in the Walker domain in DMC: \\\nWe trained 3 differently seeded expert policies on each of the 4 tasks in the Walker domain. At the end of training, we constructed a dataset of 1 million state-action tuples by rolling them out for each seed. Finally, we trained using these datasets by behavior cloning using policy networks that are either (1) **Scratch**: randomly initialized or (2) **Pretrained**: pretrained using MOSS. We report each task's mean and standard error below (each seed is evaluated for 50 episodes).\n\n||Walker Walk|Walker Run|Walker Flip|Walker Stand|\n|-|:-:|:-:|:-:|:-:|\n|**Scratch**|283.8±5.0|120.2±6.2|256.5±7.0|437.0±14.1|\n|**Pretrained**|312.2±4.6|119.5±0.3|261.6±3.1|463.1±9.9|\n\nWe observe that pretrained policies perform better. Overall, we believe that a detailed investigation on this topic could be done for future work.\n\nFurthermore, recent work [a] used unsupervised reinforcement learning (URL) methods to collect data for offline reinforcement learning - a setting similar to demonstrations but with the addition of reward signals. The paper demonstrates that the dataset collected using URL methods is better than a dataset collected only using data from an expert trained using supervised RL. The paper argues that URL methods collect data samples that \"maximize various notions of coverage or diversity\" [a], which helps offline RL.\n\n## Summary\n- We updated Sec 4 (L174-191) to clarify the novelty of our method in comparison to previous work.\n- We added results for Adversarial Surprise to the main results (Fig 2 and Table 1).\n\n## References\n[a] Yarats, D., Brandfonbrener, D., Liu, H., Laskin, M., Abbeel, P., Lazaric, A., & Pinto, L. (2022). Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning. arXiv preprint arXiv:2201.13425. \\\n[b] Yarats, D., Zhang, A., Kostrikov, I., Amos, B., Pineau, J., & Fergus, R. (2021, May). Improving sample efficiency in model-free reinforcement learning from images. AAAI", " We want to thank the reviewer for the thoughtful feedback.\n\n## Response:\n**\"It would be better to add MOSS results in Figure 4b) for direct comparison.\"** \\\nWe added MOSS results in Figure 4b as suggested. Please see the updated paper.\n\n**\"It's unclear how the proposed method resolves the issue\"** \\\nWe resolved the issue of optimizing two opposite objectives using a mixture of policies, i.e., we view MOSS as a mixture of experts [a]. Since the mixture of policies builds on a competence-based approach, MOSS has two disjoint skill sets. MOSS separates the two opposite objectives using a different skill set for each objective: one skill set for surprise maximization and another for surprise minimization.\n\n**\"I suggest the author make a clear distinction between MOSS objective and a linear interpolation of CIC and negative CIC (or equivalently a scaled CIC). It makes sense that MOSS_SAME performance is in between CIC and negative CIC; the question is why MOSS can do better. One would argue that in expectation, MOSS is still just a linear interpolation of CIC and negative CIC (or a scaled CIC). Why is this not the case?\"** \\\nWe agree with the reviewer's understanding and modified the beginning of Section 4 accordingly to reflect the distinction between the MOSS objective and a linear interpolation of CIC and negative CIC (i.e., scaled CIC). 
\n\nWe agree with the reviewer that MOSS_SAME is a linear interpolation of CIC and negative CIC. Indeed, in MOSS_SAME, we directly add the CIC and negative CIC objective, giving us an objective that still corresponds to either maximizing or minimizing. \n\nIn contrast, MOSS' objective corresponds to maximizing and minimizing surprise simultaneously. Using a mixture of policies, we get two disjoint skill sets. We separate the two opposite objectives using a different skill set for each objective: one skill set for surprise maximization and another for surprise minimization. Indeed, our main contribution is this mixture of experts, and removing it from MOSS yields MOSS_SAME.\n\n**\"How do you sample $M$?\"** \\\nIn MOSS, we deterministically set $M=0$ first half of an episode and $M=1$ for the second half of the episode. Such a deterministic rule makes no assumptions about the environment's entropy. Compared to more sophisticated methods (e.g., Appendix B) for setting $M$, our heuristic does not increase the computational cost during training, does not require hyper-parameter tuning, and performs well.\n\n**\"Suppose $M$ is sampled from $\\operatorname{Bernoulli}(p)$. Does setting $p=0$ give you CIC and setting $p=1$ give you negative CIC? What are the optimal p's for different environments? Would that somewhat correspond to environmental stochasticity based on your assumption?\"** \\\nWe agree with the reviewer that If $M$ is a Bernoulli, then setting $p=0$ gives CIC, and $p=1$ gives Negative CIC.\n\nWe also agree with the reviewer's comments on $p$. The optimal $p$ tends to align with our assumption of the environment. For example, in the URLB environment, we tried sampling with $p=0.3$ (biased towards entropy minimization), and the performance dropped drastically compared to $p=0.6$. However, we emphasize that in our final method, we did not sample $M$ from a Bernoulli because we observed a higher variance across seeds compared to the deterministic rule. \n\n\n**\"Show in the experimental section how the prior distribution of $M$ affects MOSS performance\"** \\\nThanks for the suggestion. Due to the lack of space, we added the suggested experiments in Appendix C1. In MOSS, we deterministically set $M=0$ for the first half of the steps and $M=1$ for the other half of the steps in the episode. This corresponds to having $50$% of maximization data and $50$% of minimization data. Below, we report results for different ratios of maximization and minimization where 50% corresponds to MOSS:\n\n| | Walker Flip | Run | Stand | Walk |\n| ---------------- | ----------- | ------ | ----- | ------ |\n| 30% maximization | 723±36 | 515±32 | 967±3 | 921±18 |\n| 40% | 738 ±42 | 526±15 | 968±2 | 908±24 |\n| 50% (MOSS) | 729±40 | 531±20 | 962±3 | 942±5 |\n| 60% | 781±38 | 599±11 | 967±1 | 935±6 |\n| 70% | 785±39 | 567±15 | 965±3 | 933±7 |\n\nIn short, on Walker, a prior towards entropy maximization results in a better performance than a prior towards entropy minimization. \n\n\n## Summary\n- We added MOSS results into Fig 4b.\n- We modified Section 4 (L174) to clarify:\\\n (1) how the proposed method resolves the issue of optimizing two opposite objectives. \\\n (2) the distinction between MOSS objective and a scaled CIC.\n- We added ablations on the prior distribution of $M$ in Appendix C1.\n\n## References\n[a] https://en.wikipedia.org/wiki/Mixture_of_experts", " The paper presents an improved algorithm based on CIC (Laskin et al. 2022), to balance surprise maximization and minimization for RL pertaining. 
Experiments on a variety of tasks show the effectiveness of the proposed methods. The motivation of balancing maximizing and minimizing surprises is well explained, but it's unclear how the proposed method resolves the issue and how much it is different from a scaled CIC. I suggest the author make a clear distinction between MOSS objective and a linear interpolation of CIC and negative CIC (or equivalently a scaled CIC), and also show in the experimental section how the prior distribution of M affects MOSS performance. - It would be better to add MOSS results in Figure 4b) for direct comparison. It makes sense that MOSS_SAME performance is in between CIC and negative CIC, the question is why MOSS can do better. One would argue that in expectation MOSS is still just a linear interpolation of CIC and negative CIC (or a scaled CIC). Why is this not the case?\n- How do you sample M? Suppose it's sampled from Bernoulli(p). Does setting p=0 give you CIC and settting p=1 give you negative CIC? What are the optimal p's for different environments? Would that somewhat correspond to environmental stochasticity based on your assumption? Limitations and the potential negative impact were discussed at the end of the paper.", " The paper proposes a new method called Mixture of Surprises (MOSS), which tries to maximize and minimize the surprise at the same time. This results in state-of-the-art performance on the URLB benchmark. Strengths: \nThe paper is well written and covers the related literature quite well. The experiments are well done on variety of different environments. The proposed method also performs well on the URLB benchmark. \n\n\nWeakness: \nAlready exist methods which try to maximize and minimize surprise at the same time. Thus, it is difficult to find novelty in the proposed method. Can this method also be used with demonstrations? The authors should try to motivate the novelty of the paper a bit more. ", " The paper presents a method for unsupervised skill discovery via reinforcement learning. Unlike most prior methods which seek to either maximize or minimize surprise, this paper proposes to learn a mixture policy where one component of the mixture is trained to minimize surprise, and another component is trained to maximize surprise. The paper builds very closely on the work of Laskin et al [1] (CIC), and the proposed change to CIC is very small (just a one-line change in Algorithm 1). Essentially, the proposed method partitions the latent skill space into two parts, where one part is trained to maximize surprise, and the other part if trained to minimize surprise. Whether the agent chooses to sample from the surprise-maximizing part of the skill space or the surprise-minimizing part is somewhat arbitrary: the agent uses surprise maximizing skills for the first half of an episode, and surprise-minimizing skills for the second half of the episode. \n\nThe authors evaluate their method on the URLB benchmark and VizDoom tasks, and show that their method slightly outperforms CIC on both unsupervised learning metrics as well as downstream task performance. The main advantage of the method is that, unlike CIC, we do not need to pre-specify whether the agent should minimize or maximize surprise, and the algorithm is able to do well in environments that require either. \n\n[1] Michael Laskin, Hao Liu, Xue Bin Peng, Denis Yarats, Aravind Rajeswaran, and Pieter Abbeel. CIC: Contrastive intrinsic control for unsupervised skill discovery, 2022. Strengths\n- Simple, easy-to-implement method. 
\n- A somewhat more general technique for unsupervised reinforcement learning as compared to prior methods. \n- Writing quality is good. \n\nWeaknesses\n- The choice of when to switch between a surprise minimizing and maximizing objectives is somewhat ad-hoc. Authors investigate a slightly more sophisticated procedure in the Appendix, but find it to be very sensitive to hyperparameter choices, and omit it from the main paper. \n- Overall writing quality is good, but the authors spend a lot of time (2.5 pages) going over background material and summarizing past work, while the main method section is about 1 page long, including an algorithm box. The main method section does not start until the second half of page 5! Given that the paper builds very closely on CIC, a more detailed exposition on CIC and limited discussion of other techniques would have sufficed. \n- Some of the related work that also proposes to maximize and minimize surprise was only mentioned in the last section. I think this should be moved up to be in Section 3 instead, where other related works are discussed. \n In the Section 5, you say “but performed well on the standing task, which is inherently a low entropy task.” → did you mean to say high-entropy task instead? \n I don't think the limitations are particularly well-addressed in the paper. The main limitation mentioned was that the proposed method is only applicable to episode RL settings (and not reset-free settings), but this seems slightly orthogonal to the main issue of the paper. I think a more detailed discussion on how to make the objective switching more adaptive would make the limitations section much stronger. ", " This paper proposes an unsupervised RL algorithm for pretraining a policy. The algorithm is motivated by learning to both maximize and minimize the mutual information between latent skill variables and the state-transition joint distribution. The method approximates this mutual information following prior work, and proposes a learning algorithm that simply splits up the pretraining episodes into two phases: the first phase with intrinsic rewards corresponding to maximization, and the second phase corresponding to minimization. Experiments on the environments and tasks from the benchmark of [28] illustrate that this method achieves superior finetuning results compared to a variety of other methods. Ablations illustrate that this method has less degradation in performance than a maximizing phase alone (corresponding to prior work) when additional stochasticity is added to the transition dynamics, and an ablation that illustrates the importance of having the latent skill priors be different. # Strengths\n- The area of investigation is relevant to the community. Pretraining / unsupervised learning for RL agents is an active area of research interest.\n- The proposed method is technically sound.\n- The proposed method is technically straightforward.\n- The proposed method demonstrates compelling performance compared to prior work.\n- The experimental evaluation uses compelling aggregate statistics\n- The experimental evaluation compares to a large variety of prior methods\n\n# Weaknesses\n- There is some undefined terminology. \"Competence\"-, \"Data\"-, and \"Knowledge\"-based methods were never defined. This makes some parts hard to understand, e.g. L99 \"To motivate the potential of competence-based methods over data-based or knowledge-based methods\". 
I recognize that these terms come from CIC, but this paper should either restate the definitions or refer to CIC.\n- I think the objective function is mischaracterized. L183 \"In particular, we define surprise as the state-transition entropy, which corresponds to maximizing and minimizing $H[S', S]$\". Unless I misunderstand the notation, this is the entropy of the *joint* distribution $p(s',s)$, which is *not* the entropy of the state-transition dynamics distribution $p(s'|s,a)$, nor the entropy of the state-transition dynamics induced by the policy $p(s'|s) = E_\\pi p(s'|s,a)$ (which are the only two distributions that I would refer to as being \"state-transitions\"). Therefore, I would not say that the objective is the state-transition entropy. It is *related* to the state transition entropy, since $H(S',S) = H(S'|S) + H(S)$, i.e. it is the entropy of the induced Markov dynamics plus another entropy. The problem with referring to this as only the \"state-transition\" entropy is that it fails to capture the fact that the value of H(S) affects the resulting value, so I think it is a mischaracterization of the objective. I think it is important to be very precise about this naming, because, in general, optimizing $H(S'|S)$ will lead to very different results than optimizing $H(S'|S)+H(S)$. \n- The connection between the original objective and the resulting optimization procedure is unclear. The objective is $I((S', S) ; Z) = H((S', S)) - H((S', S) | Z)$. The CIC paper, upon which Alg 1 is based, uses both a noise-contrastive estimator (for the latter term) and a KNN estimator (for the former term). Alg 1 states the intrinsic reward is computed using Eq. 9, but this seems to only involve the noise constrastive estimator. Is Alg 1 correct? Where is the KNN estimator?\n- The experiments do not adequately illustrate the effect that finetuning has on the pretrained policy. It is possible that the pretrained policy does not benefit from finetuning in some cases, e.g. taking $z \\sim Z_{min}$ might lead to a high score on environments where the downstream tasks is very related to entropy minimization. Can you add to the evaluations two baselines for non-finetuned policies constructed from each of the $Z$ distributions (i.e. one for $\\pi(a|s, z\\sim p(Z_{max}))$, another for $\\pi(a|s, z\\sim p(Z_{min})$)? These should not be finetuned, instead, simply illustrate how related the learned policy is to the downstream task. I would call these something like $MOSS_{max,frozen}$ and $MOSS_{min,frozen}$ . Note that this doesn't require any additional learning, just an evaluation of these existing policies. Beyond this, you could also include the finetuning learning curves in the appendix, although that's a bit less important.\n- The current ablation does not illustrate the importance of the choice of the structure of the Z priors, which includes both the dimensionality $d$, as well as the event space. One thing that seems most natural is simply to take z \\in {0,1}, where 0 defines 'minimizing' and 1 defines 'maximizing', i.e. use $P_{min}(Z) \\doteq \\delta(z=0)$, and $P_{max}(Z)\\doteq \\delta(z=1)$. It is not clear what the additional dimensionality of $Z$ is useful for (the appendix tells us its 64 dimensional). Some discussion and empirical are needed to justify this choice. 
It could be the case that very low dimensional Zs are still useful in some (or all) tasks, intuitively, these tasks might be those in which there is only one 'useful' way each to minimize or maximize entropy, rather than environments in which repertoires (>=2 each) of entropy maximizing and minimizing skills is useful. Defining the skill variables to be continuous essentially admits learning an infinite number of skills, but it is not clear that this is the best choice.\n- Although this paper pointed out the relevance, it did not include an empirical comparison to [18], which is also motivated by the task of learning to both maximize and minimize rewards. The inclusion of this comparison would significantly strengthen the empirical results.\n\n# Minor weaknesses\n- Please add the main method's scores to Figure 4(b).\n- By the paper's definition of surprise as the entropy of the joint distribution of state transitions, the method is not optimizing *only* surprise (since it's just one of the two terms of the mutual information). The method name then becomes something of a misnomer... - Please address the weaknesses:\n - Can you fix the terminology about the three types of methods?\n - Can you fix the characterization of the objective function?\n - Can you fix the ambiguity in the connection between the objective function and the optimization procedure?\n - Can you include the two baseline evaluations I requested?\n - Can you include the ablations I requested?\n - Can you include an empirical comparison to [18]?\n- L170: \"Intuitively, the NCE loss [ 23 ] pushes p(s′ | s, z) to be a delta-like density function. However, since we define behaviors with n steps, we want a wider distribution\" The second sentence does not follow from the first. Even if p(s'|s,z) is deterministic, it can still be applied for N steps, to yield an N-step behavior. Can the paper clarify these statements?\n- Can you explain the following inconsistency? L260 claims that setting Z_min = Z_max is equivalent to CIC, yet this results in different empircal performances in Fig 4b. By L260, we should expect their results to be equivalent. Why are they not? Or perhaps it's not equivalent to CIC, because it involves flipping the sign of the intrinsic reward function? Yes." ]
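The mechanism at the center of this MOSS discussion, one skill set rewarded for maximizing a particle-based surprise estimate and another for minimizing it, with a deterministic switch halfway through the episode, can be sketched as follows. The estimator here is a simplified proxy for the KNN entropy term described above, and all function and variable names are our own; the snippet is illustrative rather than the authors' code.

```python
import torch

def knn_surprise(features: torch.Tensor, k: int = 12) -> torch.Tensor:
    """Particle-based surprise proxy: log of the mean distance to the
    k nearest neighbours of each (s, s') feature in the batch."""
    dists = torch.cdist(features, features)                # (N, N) pairwise distances
    knn = dists.topk(k + 1, largest=False).values[:, 1:]   # drop the zero self-distance
    return torch.log(1.0 + knn.mean(dim=-1))               # (N,)

def moss_reward(features: torch.Tensor, step: torch.Tensor, episode_len: int) -> torch.Tensor:
    """Maximize surprise during the first half of the episode (M = 0),
    minimize it during the second half (M = 1)."""
    surprise = knn_surprise(features)
    minimize = (step >= episode_len // 2).float()  # deterministic 50/50 schedule for M
    sign = 1.0 - 2.0 * minimize                    # +1 -> maximize, -1 -> minimize
    return sign * surprise

# Example usage with made-up batch data.
feats = torch.randn(256, 64)              # state-transition features for a batch
steps = torch.randint(0, 500, (256,))     # per-sample episode step indices
r_int = moss_reward(feats, steps, episode_len=500)
```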
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "JUkXFvOoPK", "cvVk_kBg-So", "hepydie_US_", "nips_2022_OHkq7qNr72-", "-mPx8SsbOag", "-mPx8SsbOag", "AXEBllPVy8X", "QhvG92gH0zP", "DC7do0cQ9_C", "nips_2022_OHkq7qNr72-", "nips_2022_OHkq7qNr72-", "nips_2022_OHkq7qNr72-", "nips_2022_OHkq7qNr72-" ]
nips_2022_pCrB8orUkSq
Monocular Dynamic View Synthesis: A Reality Check
We study the recent progress on dynamic view synthesis (DVS) from monocular video. Though existing approaches have demonstrated impressive results, we show a discrepancy between the practical capture process and the existing experimental protocols, which effectively leaks in multi-view signals during training. We define effective multi-view factors (EMFs) to quantify the amount of multi-view signal present in the input capture sequence based on the relative camera-scene motion. We introduce two new metrics: co-visibility masked image metrics and correspondence accuracy, which overcome the issue in existing protocols. We also propose a new iPhone dataset that includes more diverse real-life deformation sequences. Using our proposed experimental protocol, we show that the state-of-the-art approaches observe a 1-2 dB drop in masked PSNR in the absence of multi-view cues and 4-5 dB drop when modeling complex motion. Code and data can be found at http://hangg7.com/dycheck.
Accept
Pre-rebuttal, this paper had mixed reviews. Post-rebuttal, the paper had two strong supporters, A6gt and vDbH, who argued that the paper provides valuable insights into an important field, as well as a supporter dLU6, who commented in the discussion below that they are in favor of the paper (although did not update their review). The only remaining criticism comes from 2BcV. The AC does not find 2BcV's review persuasive (A6gt's comments summarize the AC's perspective well) and 2BcV did not participate in discussion. The AC is inclined to accept the paper and encourages the authors to use their extra page to integrate their responses to the reviewers.
test
[ "y6X2UEzJIpJ", "D_j-GoSB9f", "G2u-3KO6N4Q", "A_M9uad6L9", "wiBGHcuP605", "Ka3-Ozb-NLB", "vy_vt0cuN-q", "UcUugUcVq9P", "2z_9k7qbRFj", "zTqu_Xjhls7", "8EX8_O33Hzw", "Ch1bNawxkfB", "PDN1Abae72", "yPNVkq5NFJv", "H5_3FlzWouL", "Xqe-Mv7xpxx" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We’d like to thank the reviewers for the discussion. We are glad that they are now positive about the work, and would request the reviewer to kindly update their rating/review to reflect this. We will modify the text in the final version using the comments from Reviewer vDhB to help position the work and its importance.", " Thanks for your comments. I think that your (vDbH) discussion about the \"important problem\" more directly addresses my main concern than the authors' reply. As there are no technical issues in this paper, as I mentioned before, I agree to accept the paper. \n\nI would suggest the authors highlight the discussion that you (vDbH) post here in the final submission.", " I would like to thank the authors for the rebuttal. It addressed my concerns on the paper. I am in favor of an accept.\n\nThe paper tackles an important problem -- measure the difficulty in dynamic novel view synthesis dataset. All four methods (Nerfie, HyperNeRF, T-NeRF, and NSFF) try to synthesize novel view from a dynamic capture. They all achieve SOTA numbers on their own dataset. However, it is hard to compare across different datasets -- some captures are easier to reconstruct than others. The proposed EMF metrics has shown a high correlation with the reconstruction quality. With the EMF score provided for each dataset, readers can have a better understanding how good the proposed method really is.\n\nDynamic novel view synthesis is the task of both interpolation and hallucination. The method need to not only interpolate pixels from multi view cues but also hallucinate pixels for occlusion/dis-occlusion. The proposed EMF measures how much these NeRF-based methods benefits from multi view cues. \n\nI also agree reviewer A6gt's comments on other two reviews. Those two reject ratings are not supported by enough solid weakness arguments.", " We would like to thank the reviewer for the discussion, and wanted to offer some thoughts on the issues raised. \n\nIn particular, we wholeheartedly agree with the statement that “We can just encourage the future work to train models from a monocular video only. We can be more careful when reviewing papers''. Unfortunately, this does not ‘just’ happen — and papers like ours are precisely the mechanisms that encourage the community to do this. One of our key takeaways is that one should consider the ‘multi-viewness’ of the data under which a method is shown to work, and our hope is that this work provides both the motivation and an empirical means for the community to ‘be more careful’ regarding this aspect. \n\nOn the other hand, we respectfully disagree with the claim that “If the problem is that it is hard to evaluate methods on monocular videos, we can just use a multi-camera setup that is more reliable”. We do agree that using a multi-camera system for validation only is “reliable”, as we have shown in Table 2 of our paper and Table 1 in this rebuttal. Effective as it is, in practice, setting up a reliable multi-camera system in-the-wild, however, is challenging due to time-synchronization, different exposure and camera calibration. Our paper responds to this issue by additionally evaluating correspondences over monocular training views. In Table 2 of our paper, we have shown that the correspondence evaluation results, in absence of multi-view validation data, strongly correlate with a model’s performance in dynamic novel-view synthesis. 
We hope our proposal expands the types of data our community can exploit for evaluating methods.", " Thanks for the detailed reply. I can feel that the authors put a lot of effort into this paper, including paper writing, experiments, and rebuttal. Also, I agree with Reviewer A6gt that there is not some technical flaw.\n\nMy main concern is about the contribution, as mentioned by the author \"Our contribution is an empirical analysis of the existing evaluation protocols for dynamic novel-view synthesis. Our study has found that the existing training and quantitative evaluation protocols for this task have been operating in an effectively multiview regime.\" More specifically, I do not feel that the authors really solve an important problem. \n\n(1) If the problem is that the previous method uses multi-view data in training. e.g., interleaving frames, we can just encourage the future work to train models from a monocular video only. We can be more careful when reviewing papers.\n\n(2) If the problem is that it is hard to evaluate methods on monocular videos, we can just use a multi-camera setup that is more reliable. I do not think that the multi-view setup is an issue in evaluation, although I agree that a monocular training method is more promising.\n\nI still need some time to make a decision.\n\n", " Thanks for the rebuttal. That answers my questions. \n\nI've also read other reviews. For reviewer 2BcV, I agree \"real word video sequences could also contain a well proportion of high EMF data\". However, as authors write, their goal is to expose that existing methods primarily train and evaluate on high EMF data. At the same time, personally I don't think it's reasonable to reject a paper simply because of this (without mentioning any other weaknesses).\n\nFor reviewer dLU6, the primary concern is (1) the setup and (2) Masked-PSNR/EMF are simple. However, a simple approach does not mean it's ineffective. As long as it solves the problem, it's great.\n\nEspecially, I don't think these weaknesses lead to \"3: Reject\" since it should have \"technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.\" I don't find any reviewers points a technical flaw.\n\nOverall, I decide to keep my rating.", " **Q3. “The idea of Masked-PSNR is not surprising… And we may also hope that the methods can predict … regions that are not observed.”**\n\nWhile the idea of Masked-PSNR is not surprising, prior approaches have not used this criterion before for evaluation. As a reminder, prior approaches have been evaluated by interleaving frames from different cameras to ensure that all testing regions have been observed during training. Quoting the Nerfies paper: \n> “We alternate assigning the left view to the training set, and right to the validation, and vice versa. This avoids having regions of the scene that one camera has not seen.” \n\nHowever, as we have discussed, interleaving frames from different views during training leaks multiview cues, which no longer makes the evaluation monocular. Masked-PSNR goes around this issue so that no interleaving has to be done. As prior approaches have always quantitatively trained/evaluated with interleaving, our proposal to use Masked-PSNR for evaluation on this task is novel. We will incorporate this discussion in the paper.\n\nWe agree that it is interesting for methods to evaluate prediction and synthesis of regions that are not observed during training. 
Here we include more results on our iPhone benchmark when testing with and without masking unseen parts during training. We train all models with depth and sparsity regularization.\n\n| | PSNR with masking | PSNR without masking |\n|-----------|:-----------------:|:--------------------:|\n| T-NeRF | 16.4 | 9.5 |\n| Nerfies | 16.9 | 9.1 |\n| HyperNeRF | 16.7 | 9.3 |\n| NSFF | 14.0 | 9.2 |\n\n**Table 2.** Evaluating the state of the art on the proposed iPhone benchmark with and without masking unseen regions during training.\n\nNotice that the PSNR significantly degrades for the regions that are not observed during training. We will include these results and discussion in the experiment section of our paper. \n\n**Q4. “Although Nerfies trained models on images sampled from two cameras, the method is still able for a single-camera setup.”**\n\nIt is true that Nerfies can train on a single-camera input. However, they do not report a quantitative evaluation for the single-camera setup without interleaving frames from the two views during training. Thus, we argue that their training/evaluation setup does not qualify as “monocular”.\n\nOur work is the first to show a “monocular” training/evaluation protocol. We have also benchmarked previous methods before and after fixing the interleaving issue during training using our evaluation protocol. We have observed that there is a huge performance gap in Table 1 (this rebuttal) to Reviewer vDbH-Q2, aligning well with what we have found on our iPhone benchmark.", " We appreciate your thoughtful feedback. We respond to your comments below.\n\n**Meta point. “If the authors propose a new method, it is good to evaluate it in the proposed datasets and evaluation metrics. However …, not significant enough … for presentation in NeurIPS.”**\n\nOur contribution is an empirical analysis of the existing evaluation protocols for dynamic novel-view synthesis. Our study has found that the existing training and quantitative evaluation protocols for this task have been operating in an effectively multiview regime. Our study was inspired by impactful empirical studies for other tasks, such as object detection [1], object level single-view 3D reconstruction [2], and metric learning [3]. We hope that our work better calibrates the recent results on this task. \n\n1. An empirical study of context in object detection. Divvala *et al.*, CVPR 2009.\n2. What Do Single-view 3D Reconstruction Networks Learn? *Tatarchenko et al.*, CVPR 2019.\n3. A Metric Learning Reality Check. *Musgrave et al.*, ECCV 2020.\n\n**Q1. “Using a multi-camera setup is more reliable than single-camera setup for performance analysis. After all, the goal is to evaluate methods instead of training models…We do not always need to capture new data.”**\n\nWe agree that a multi-camera setup is more reliable than a single-camera setup for evaluating dynamic novel-view synthesis performance. Note that we also include a multi-camera evaluation in Table 2 of our paper. \n\nWhile we agree that the goal is to evaluate methods, we wish to point out that one of the main messages of our paper is that prior methods also effectively train in a multiview setting while claiming that they reconstruct from a “monocular”/single-view capture. Our contribution is not merely in proposing new data, but to expose this fact and propose a metric (EMF) in which future methods can quantify the difficulty of their training/evaluation setup based on the relative camera and object motion. 
\n\nTo further motivate the need for capturing new data, please note that prior work trained and evaluated on data that often depicted objects that have slow motion. For example, please refer to the “peel-banana”, “tail”, “chicken”, and “toby-sit” sequences from the Nerfies/HyperNeRF captures on our supplemental webpage. This finding is one of our main motivations for capturing new data with larger and more complex motions in daily scenarios. As shown on the supplementary webpage, our “teddy” sequence depicts complex deformation of the toy body; our “block” and “wheel” sequences depict articulated motion. Our new captures augment existing benchmarks with larger object and smaller camera motions. We will add these points to the paper.\n\nFurthermore, we propose to evaluate correspondence by PCK-T and our dataset also comes with the labels to run this analysis, which has not been done before. Indeed when trained on real monocular capture sequences, these methods do not work as well. In the final version, we will also include the result with and without interleaving on existing datasets (our Table 1 for Reviewer vDbH-Q2 in this rebuttal), where the performance goes down without interleaving. \n\n**Q2. “The factor of the multiview effect is just a simple trick, and there exist many other variants if we want. … Not significant.”**\n\nWe agree that our EMF formulation is simple, and allows quantifying that the existing training/evaluation has been operating in an effectively multiview regime. The novelty and importance of our work is in exposing issues in the current experimental protocol and pointing out the need for metrics like EMF. This observation has been overlooked by the community, and we hope that our findings can help instantiate better evaluation practices going forward.", " Thanks for your feedback. We appreciate that you find the need for EMF to characterize the (multiview) difficulty of the data, and that you see the need for PCK-T correspondences for evaluating deformation based approaches. We will add a discussion on limitations in the final paper. Please see our responses below.\n\n**Q1. “How do you select keypoints for each training sequence?”**\n\nFor sequences of human and quadrupeds (dogs or cats), we annotate keypoints based on the skeleton defined in the COCO challenge [1] and AnimalPose [2]. For sequences that focus on more general objects (like “block” or “teddy”), we manually identify and annotate 10 to 20 trackable points across frames. We will add these details in the final version.\n\n1. Microsoft COCO: Common Objects in Context. Lin *et al.*, ECCV 2014.\n2. Cross-Domain Adaptation for Animal Pose Estimation. Cao *et al.*, ICCV 2019. \n\n**Q2. “Might be better for a benchmark track.”**\n\nThanks for your suggestion. While we provide a dataset with low multiview cues, the main goal of our work is to point out the issues (or the discrepancies between the claims and the evaluation protocol) of recent works in this domain in the spirit of papers like [1,2,3], where we believe the main track serves this purpose better. \n\n1. An empirical study of context in object detection. *Divvala et al.*, CVPR 2009.\n2. What Do Single-view 3D Reconstruction Networks Learn? *Tatarchenko et al.*, CVPR 2019.\n3. A Metric Learning Reality Check. *Musgrave et al.*, ECCV 2020.\n\n**Q3. “Limitations are not discussed.”**\n\nWe will add a discussion on limitations in the final paper. 
As also mentioned to Reviewer 2BcV-Q2, our work only addresses the effect of relative camera and object motion in evaluating dynamic novel-view synthesis from a monocular video. There are other factors that affect the difficulty of the task, such as scene appearance, lighting conditions, difference between training and testing views, and type of object/scene deformation. These factors are important and beyond the scope of our paper.\n", " Thanks for your comments. We are glad that you have found our metrics to be innovative. Please refer to our responses to your questions below.\n\n**Q1. “Real world video sequences could also contain … high EMF data. So the argument that low EMF datasets are better for evaluation may not hold unconditionally.”**\n\nWe want to clarify that our message is not that low EMF captures are always better. Instead, our goal is to expose that existing methods primarily train and evaluate on high EMF data (with interleaving cameras), while claiming “monocular capture”. This setup does not reflect everyday videos captured with a single hand-held camera (e.g., a smartphone). The contribution of our paper is to quantify the relative camera and object motion (“multiview-ness”) of a monocular capture, and to suggest future works to report the EMF on newly captured data, as supported by Reviewer vDbH.\n\n**Q2. “The difficulty of a dataset may also come from other aspects.”**\n\nAgreed. There are many other factors that might affect the difficulty of a capture. In our work, we mainly focus on the effect of camera and object motion and show it is a significant factor, which has not been revealed before. A non-exclusive list of other factors affecting the difficulty of a capture might include scene appearance, lighting conditions, difference between training and testing views, and type of object/scene deformation. We will add this discussion in the main paper. \n\n**Q3. “Do high EMF sequences always have better performance?”**\n\nWe have reported a new experiment in Table 1 (this rebuttal) to Reviewer vDbH-Q2 where we disabled the interleaving cameras of the data and trained all methods with a single camera (the same sequence but with lower EMF). The performance drops for all methods by a large margin. This finding suggests that high EMF sequences will always have better performance, when all other aspects are fixed, e.g., scene appearance, lighting conditions, difference between training and testing views, and type of object/scene deformation.\n\n**Q4. “Limitation section.”**\n\nThanks for your feedback. We will add a discussion about limitations in the final paper. Our EMF formulation only quantifies the impact of relative camera and object motion to the evaluation of dynamic novel-view synthesis from a monocular video. Other factors, like scene appearance and lighting conditions, can also affect the evaluation, which are also important and beyond the scope of our paper.\n", " Thanks for your feedback and appreciation. We are glad that you find our EMF is a nice metric to have in the future dataset release, and that you see our Masked PSNR and Correspondence as smart design choices for synthesized image evaluation. Below we will address your comments.\n\n**Q1. “Camera angular velocity is high due to switching between cameras. What if you only train with one camera, will Nerfies still work? It will have lower EMFs.”**\n\nYour intuition is correct that interleaving between cameras leads to high EMFs. 
We also experimented with training on a single camera with lower EMF using the seven sequences from Nerfies and HyperNeRF benchmarks. Table 1 (this rebuttal) shows that all methods suffer a significant performance drop (2dB in PSNR and 0.1 in LPIPS) when disabling interleaving. These results validate our argument that existing interleaving training schemes leak multiview cues. We plan to include this table as well as more visualization results into our next revision.\n\n| | With interleaving: $\\Omega$ = 2.18, $\\omega$ = 166°/s | Without interleaving: $\\Omega$ = 0.79, $\\omega$ = 21°/s | Performance gap percentage | |\n|-----------|:-----------------------------------------------------------:|:---------------------------------------------------:|:----------------------------:|---|\n| | PSNR (dB) ↑, LPIPS↓ | PSNR (dB) ↑, LPIPS↓ | PSNR (dB) ↑, LPIPS↓ | |\n| Nerfies | 22.2, 0.239 | 20.0, 0.352 | -10%, +47% | |\n| HyperNeRF | 21.9, 0.241 | 19.7, 0.353 | -11%, +46% | |\n| NSFF | 25.2, 0.342 | 23.3, 0.512 | -8.0%, +50% | |\n\n**Table 1.** Evaluating the state of the art on existing Nerfies and HyperNeRF benchmarks with and without interleaving between frames during training.\n\n**Q2. “Correspondence metrics lack generalizability …only methods that explicitly model motion can calculate the PCK-T score. It does not apply to methods like T-NeRF. ”**\n\nWe agree that the correspondence metric only applies to methods that explicitly reason about motion. Yet we want to argue that correspondence is an important aspect of non-rigid reconstruction [1,2] and is the foundation of many downstream applications such as content editing [3]. So we encourage future methods to explicitly model and evaluate it. As also pointed out by Reviewer A6gt, prior works focus on evaluating photorealism and overlooks the fact that photometric supervision may lead to erroneous correspondences (see Figures 1-3 in the supplemental materials). The correspondence accuracy will be captured by the PCK-T scores (see Table 2 in the main paper).\n\n1. Optimal Step Nonrigid ICP Algorithms for Surface Registration. Amberg *et al.*, CVPR 2007.\n2. DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time. Newcombe *et al.*, CVPR 2015.\n3. TAVA: Template-free Animatable Volumetric Actors, Li *et al.*, ECCV 2022\n\n**Q3. “No view direction or appearance encoding.”**\n\nWe turned them off since we noticed that models overfit and perform worse on our iPhone sequences. If requested, we’re happy to add results with the view-direction and appearance encoding enabled. For example, the models in Table 1 (this rebuttal) have both turned on, as per the default training recipes from the original code release. We find in practice that turning on the view-direction or appearance encoding does not affect our conclusions.\n\n**Q4. “Limitations are not discussed. The authors can discuss the use case of EMF and two new metrics more.”**\n\nThanks for your suggestion; we will incorporate this feedback into our paper. Concretely, we will add a discussion that our EMF only addresses the effect of relative camera and object motion in evaluating novel-view synthesis from a monocular video, while in practice there are other factors like scene appearance and lighting conditions, which we leave for future works. We will also add a discussion that evaluating with Maksed PSNR and PCK-T requires scene depth acquisition and keypoint annotation. The former may not always be available and the latter requires human annotation. 
We hope to alleviate these issues by publicly releasing our tools for capturing and labeling.", " We would like to thank all the reviewers for their thoughtful comments. While we address the specific concerns raised in detail in individual comments, we would like to take this opportunity to highlight why we believe our paper would significantly benefit the community.\n\nA growing number of recent works have addressed (neural) reconstruction of dynamics scenes. While we have seen several impressive algorithmic innovations in this area, we believe that the commonly prevalent evaluation practices have led to a disconnect between the empirically validated setups (effectively-multiview captures) and the claimed applications (e.g. reconstruction of generic scenes from casual monocular videos). We view our work as providing a first systematic means of characterizing this discrepancy (via the proposed EMF characterization) and highlighting the corresponding performance implications.\n\nWe of course agree that our work is not the final word on how one should analyze the difficulty of the learning setup, but strongly believe that our EMF characterization, along with suggested practices for evaluation in monocular setups as well as a representative dataset will serve as a starting point for the community. In particular, we hope that like papers [1,2,3] which helped move their respective fields forward, our work can help instantiate better experimental practices for future works in our field.\n\n1. An empirical study of context in object detection. Divvala *et al.*, CVPR 2009.\n2. What Do Single-view 3D Reconstruction Networks Learn? Tatarchenko *et al.*, CVPR 2019.\n3. A Metric Learning Reality Check. *Musgrave et al.*, ECCV 2020.\n", " This paper studies the effective metrics for evaluating recent works on novel view synthesis from monocular videos. The proposed effective multi-view factors (EMF) measures multi-view signals in the evaluation of view synthesis. It also comes with a new dataset which tries to mitigate the multi-view signals in captures. Two new metrics are proposed to measure the quality of view synthesis and motion. Several state-of-the-art methods are further improved, but still struggle in the cases of motion and few multi-view signal. # Strengths\n- EMF\n\nThe effective multi-view factors measure the amount of ''multi-viewness'' in the capture. Such ''multi-viewness'' makes the task of novel view synthesis on dynamic scenes easier as the problem degrades into a multi-view setup. EMF consists of scene motion and camera angular velocity. And higher value in either suggests there is a strong multi-viewness in the capture. EMF basically tells how easy/difficult the capture is to reconstruct. Such metrics is nice to have in the future dataset release.\n\n- Masked PSNR and Correspondence\n\nTwo new metrics are proposed to further measure the quality of the synthesized image. Both are tailored for dynamic scenes, but I see them also useful in static scenes as well. Masked PSNR only calculates PSNR in the valid region, and PCK-T tells whether the predicted motion is correct. The design choices are smart.\n\n# Weaknesses\n\n- Correspondence\n\nI see masked PSNR can be applied to any method as long as the ground truth pose and depth are available. However, the correspondence metrics lacks generalizability. As shown in the paper, only methods that explicitly models motion can calculate the PCK-T score. 
It does not apply to methods like T-NeRF.\n\n- Camera angular velocity\n\nNerfies and HyperNeRF have a high camera angular velocity in Tbl. 1. As far as I see, this is because it is switching frames between two cameras. If the captures are reordered in a way that uses frames from camera 1 first and camera 2 second (or only frames from camera 1), will Nerfies still work? If so, it will have a much smaller velocity, thus less EMF. It would be interesting to see how the method performs vs. EMF.\n\n- No view direction or appearance encoding\n\nIt is mentioned in L297 both are turned off during training. But the goal of PSNR_M is to evaluate the quality of NVS in seen region. In practice, view direction and appearance encoding are essential in synthesizing new views. Turning them off will hurt the PSNR_M score a lot. Is there a specific reason doing so aside from overfitting?\n\n- Typo\nL32 agnitude -> magnitude See above Limitations are not discussed. The author can discuss the use case of EMF and two new metrics more.", " This paper studied the problem of dynamic 3D scene synthesis from monocular video sequence and found flaws of over-representation of slow-moving objects with a fast-moving camera in existing datasets. The authors then proposed a new metric called effective multi-view actors (EMF) to quantify the amount of multi-view signal in the image sequence. The authors also introduced a new dataset with very low EMF and argued that the new dataset should be more suitable for evaluation of dynamic 3D scene synthesis methods. Finally the authors evaluated four representative algorithms on the new dataset and find performance gap not being noticed with previous existing datasets. Strengths:\n- The authors delve into the characteristic of existing dataset and managed to produce innovative metric to evaluate the difficulty of dataset in terms of monocular dynamics.\n\nWeaknesses\n- While the ability of build 3d representation from monocular dynamics is desirable, real word video sequences could also contain a well proportion of high EMF data. So the argument that low EMF datasets is better for evaluation may not hold unconditionally. The difficulty of dataset may also come from other aspects, such as shape complexity and surface property of objects. When evaluate on different sequences with different EMF value, do we always find lower loss for high EMF sequences on existing methods? The authors answered YES for the question “Did you describe the limitations of your work” but not stating which section contains it.", " The paper does a complete review for existing approaches recovering dynamic 3D scenes from monocular videos, especially Nerfies, HyperNeRF and NSFF. It studies the camera trajectory, proposes a metric Effective Multiview Factors (EMF) to quantify multi-view cues in a dynamic scene with moving cameras. Existing datasets typically have high multi-view cues. Therefore, a new dataset captured by iPhone is introduced with little multi-view cue. Additional metrics such as Masked PSNR and PCK-T are also introduced to ensure the fairness of evaluation. The new benchmark brings additional challenges for existing approaches of dynamic 3D capture. Strengths:\n\n- Quantifying multi-view cues using EMF is neat. Different datasets in dynamic 3D capture proposes different data including different camera trajectories. And EMF quatifies the difficulties of all the data.\n- I also like PCK-T correspondence besides normal PSNR. 
Right now a lot of novel view synthesis approaches focus on PSNR but PSNR does not directly reflect how well the model understands the 3D world. PCK-T is more explicit and I think reporting both masked PSNR and PCK-T is a good idea.\n- Supp webpage is fantastic. Thank you!\n\nWeaknesses:\n\nI don't see any weaknesses, although I think the paper is more like a new benchmark -- It might be more suitable for the benchmark track.\n\nAdditional comments (no need to address):\n\n> Page 8 footnote: We find that this code base performs better than the original code release\n\nmissing a period. > (L213 - 215) Each training sequence is annotated with 10 to 20 keypoints in every 10 frames\n\nHow do you select these keypoints?\n Limitations are not discussed in the paper.", " This paper proposes a dataset of monocular videos to evaluate non-rigid novel view synthesis methods. A factor for measuring the multi-view effect is proposed, and two evaluation metrics are proposed. Strengths:\n\nThe paper is well-organized, and it is easy to read.\n\nThe datasets with evaluation metrics are proposed.\n \nSeveral methods of non-rigid novel view synthesis are evaluated on the proposed datasets. \n\n\n\nWeakness:\n\n1. Using a multi-camera setup is more reliable than single-camera setup for performance analysis. After all, the goal is to evaluate methods instead of training models, I don’t think that it is the drawback of existing methods. We do not always need to capture new data for non-rigid novel view synthesis evaluation, so it is difficult for me to understand the strengths of the proposed solution.\n\n2. The idea of Masked-PSNR is not surprising. First, if our goal is to evaluate the regions that are observed, it would be straightforward to mask other regions out. Second, we may also hope that the methods can predict and synthesize the regions that are not observed in training data. In this scenario, the mask should not be used. Overall, I don’t think that the masking is novel.\n\n3. Although Nerfies trained models on images sampled from two cameras, the method is able to train models using a single-camera setup. \n\n4. The factor of the multi-view effect is just a simple trick, and there exist many other variants if we want. It is correct, but I don’t think that it is sufficiently significant in this problem. \n\nPost rebuttal,\n\nAfter discussion, I agree with Reviewer vDbH, and I would strongly suggest the authors use the comments by vDbH in the final submission to position the work.\n If the authors propose a new method, it is good to evaluate it in the proposed datasets and evaluation metrics. However, from my perspective, the contribution is not significant enough for presentation in NeurIPS. yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4, 4 ]
[ "D_j-GoSB9f", "G2u-3KO6N4Q", "PDN1Abae72", "wiBGHcuP605", "vy_vt0cuN-q", "2z_9k7qbRFj", "UcUugUcVq9P", "Xqe-Mv7xpxx", "H5_3FlzWouL", "yPNVkq5NFJv", "PDN1Abae72", "nips_2022_pCrB8orUkSq", "nips_2022_pCrB8orUkSq", "nips_2022_pCrB8orUkSq", "nips_2022_pCrB8orUkSq", "nips_2022_pCrB8orUkSq" ]
nips_2022_jSorGn2Tjg
Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures
Antibodies are immune system proteins that protect the host by binding to specific antigens such as viruses and bacteria. The binding between antibodies and antigens is mainly determined by the complementarity-determining regions (CDR) of the antibodies. In this work, we develop a deep generative model that jointly models sequences and structures of CDRs based on diffusion probabilistic models and equivariant neural networks. Our method is the first deep learning-based method that generates antibodies explicitly targeting specific antigen structures and is one of the earliest diffusion probabilistic models for protein structures. The model is a "Swiss Army Knife" capable of sequence-structure co-design, sequence design for given backbone structures, and antibody optimization. We conduct extensive experiments to evaluate the quality of both sequences and structures of designed antibodies. We find that our model could yield competitive results in binding affinity measured by biophysical energy functions and other protein design metrics.
Accept
This is a very exciting and timely paper that elegantly enables CDR sequence-structure co-design, sequence design given a certain backbone, and antibody optimization. The reviewers and AC all appreciate the extensive feedback provided by the authors and the additional studies included in the supplements. We strongly encourage the authors to also incorporate in their manuscript certain points made in their feedback. In particular, please include comments to: contrast the proposed approach with (i) the work "Iterative Refinement Graph Neural Network for Antibody Sequence-Structure Co-design" and (ii) neutralization prediction approaches; highlight the limitations of docking algorithms and the pertinent future-work direction of generating antibody orientations for antigens; and clarify various points, such as the description of Figure 1.
train
[ "GoPhnphp6w", "ToM4S0PHJIn", "saZNNC5zryk", "T1rH8mSdctM", "kbnjaU4sqH", "UyCKO5jIrWNW", "U-jiuHMHlCko", "_XsZRYOtyr", "64q5oVGC0f0", "kkNqs5DuFQT", "al5S-1vHUc39", "24poOHqLJeT", "AMytTHJwwq8", "0Ck8zXfinzA" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nThanks for addressing all of my comments! I increased my score to “weak accept”.\n\n[Q2] I agree with you that MSA based approaches such as AlphaFold2 or RosettaFold are less accurate for predicting CDR loops due to often insufficient homologs for building a reliable MSA. However, recent approaches such as OmegaFold, Nanonet, or IgFold enabled more accurate predictions and it remains unclear if your approach performs better. I believe that you can strengthen your manuscript by including the discussed baseline approach that first generates the antibody sequence conditioned on the antigen, and then uses OmegaFold or AlphaFold2 to predict the structure.\n\n[Q3] Thanks for your description and additional experiment. Please describe the differences between Jin et al and your approach in the related work section and refer to your additional experimental results.\n\n[Q4] I agree with you that not conditioning the generation on FWRs would require aligning and grafting generated CDRs on the parent structure as also performed by Jin et al. However, it would reveal if doing so improves the quality of generated CDRs.\n\n[Q5/Q6] Please describe your runtime measurements in the manuscript. Also highlight that the performance of your model depends less on the length of FWRs, which can be long.\n\n[Q9] Thanks for explaining. I would appreciate if you could summarize the RMSD of the relaxed and unrelaxed structures of each method in a table . This will show how much the structures of each methods are changed by relaxing them.\n\n[Q11]: Please describe this in the main text.\n\n[Q12]: Please also interpret figures in the main text.\n", " Dear authors,\n\nThank you for taking the time to provide additional information in the author response. This has addressed some of my main concerns, and I have updated the review accordingly to increase the score. Please do include the additional information here (especially about limitations of docking algorithms, antigen overlap, and the importance of the orientation) into the final version of the paper -- I believe the additional information will help clarify the manuscript.", " We thank the reviewer for the valuable comments that help us imporve our work!\n\nWe have added more architecture details (number of parameters, number of encoder layers, number of attention heads, *etc*.) to **Section E.5** of the updated supplementary material.\n\nIn summary, the sizes of our model and the baselines are about the same as they share the same encoder architecture. As for the inference time, our model takes longer time because the diffusion process requires running the whole network at every step. Our model also takes slightly longer time to train due to the forward diffusion process at each training step.\n\n", " Dear authors,\n\nThank you for the additional clarifications. About the responses you provided:\n\n[Q1] Agree with the benefits of the structure-based approach in terms of increased interpretability and advantages in the low data regime. While there are many viruses for which few antigens are available (eg., emerging viruses early in a pandemic, under-studied viruses), there are also several viruses for which there are many antibody sequences available. In these instances, neutralization predictors seem to be very compelling options. 
It does not take anything away from your contributions, but perhaps something to express a bit more clearly in the text to emphasize the situations in which your method will be most compelling Vs prior alternatives.\n\n[Q2] Makes sense - thank you. \n\n[Q3] Thank you for running these additional experiments. Could you please add architecture details (eg., number of parameters, layers, attention heads) for the two new baselines? How do they compare to your model in terms of total # of params and training / inference time?\n ", " **[Q10] Double-stranded antibodies.**\n\nYes. We would generate them independently in this case.\n\n----\n\n**[Q11] Why does IMP% in Table 3 decrease with increasing t?**\n\nTo optimize an antibody, we first perturb it for $t$ steps and then denoise it for another $t$ steps. Therefore, higher $t$ indicates more aggressive sampling and exploration of a larger space around the initial antibody. Exploring in a larger space leads to a lower probability of finding an antibody that has better binding energy than the original antibody (lower IMP%), and the optimized antibodies are less similar to the original one (higher RMSD and lower SeqID).\n\n----\n\n**[Q12] About the figure.**\n\nFigure 1 shows the complementarity between the antigen (red) and the generated CDRs (blue). \nIn sample 1, the sidechains of the antigen ``embrace’’ the CDR without any clashes. Hence, the CDR geometrically fits the best to the antigen among the 3 samples, and it has the lowest binding energy.\nIn sample 2, the CDR fits less tightly than the CDR in sample 1, but it is still a good fit. Therefore, it is binding energy is a bit higher.\nIn sample 3, there is a significant gap between the CDR and the antigen, which indicates bad complementarity. Therefore, the binding energy is the highest among the 3 samples.\n\n", " **[Q6] Scalability to long sequences.**\n\nThe length of CDR-H3 in therapeutic antibodies mostly ranges from 10 to 15. Our testset contains 5 short (5\\~10 residues), 10 medium (11\\~15 residues), and 5 long (16\\~20 residues) CDR-H3s, which are sufficiently representative for most therapeutic antibodies. As for the other regions of an antibody, they can be considered constant when we design CDRs, and they are not the major factor of antigen-binding. Therefore, we believe it is reasonable to focus on generating only CDRs in our setting.\n\n----\n\n**[Q7] About the augmented training dataset.**\n\nThe results about the performance of models trained with/without the augmented dataset are put to the **Section E.3** in the supplementary material.\n\nAccording to the results, our main finding is that the contribution of the augmented dataset is most significant on CDR-H3s.As CDR-H3 is the most variable region, using extra training data would help the model generalize better and leads to better structure accuracy and sequence recovery.Other CDR regions are conservative, so learning only from antibody structures is somewhat sufficient to model these non-versatile regions.\n\n----\n\n**[Q8] Accuracy of sidechain orientations.**\n\nOur diffusion model does not explicitly generate sidechains. It generates the orientation of amino acids which determines the direction to which the sidechain stretches.\nTo evaluate the accuracy of orientation, for each antibody in the testset, we fix the sequence and sample only the structure using our diffusion model (this is actually the CDR loop structure prediction task). 
We compare the generated orientations to the ground truth orientations and measure the angle error. The average angle error of the first axis in the orientation matrix is 19.7 degrees, and the average angle error of the second axis is 20.6 degrees.\n\nActually, in addition to determining sidechains, the orientations also determine the position of backbone atoms other than $C_\\alpha$. Therefore, backbone RMSDs can also reflect the accuracy of orientations in a more intuitive way. \nWe sample only the structure using our diffusion model with the sequence fixed, and calculate the backbone RMSDs of different CDRs: \n\n| CDR | H1 | H2 | H3 | L1 | L2 | L3 |\n| ---- | ----- | ----- | ----- | ----- | ----- | ----- |\n| RMSD | 1.09Å | 0.69Å | 2.56Å | 1.76Å | 0.60Å | 1.17Å |\n\nWe can see that our model predicts CDR backbone structures well, and this reflects the prediction accuracy of amino acid orientations. \n\n----\n\n**[Q9] Influence of AMBER and Rosetta relax.**\n\nThe purpose of AMBER is to repair imperfect bond lengths and bond angles which would result in unrealistic high energy. For example, the ideal length of the N-C peptide bond is 1.329Å. This is a strict restraint and even small deviation (e.g. 0.1Å) from it would lead to a significant penalty by the energy function. If we don’t apply AMBER to idealize bond geometry, these penalties would dominate the energy value, making the energy value nonsensical. In this case, the ranking results are dominated by how ideal the bonds are rather than the binding between the antibody and the antigen.\n\nTo see this concretely, we compute the binding energies of structures before AMBER-Rosetta relaxation and after relaxation. Before relaxation, the average range (max - min) of binding energies is 3573.38. After relaxation, the average range is 77.15. Clearly, the energy values before relaxation greatly diverge.\n\nAs for Rosetta-relax, it is a standard step in the Rosetta scoring protocol [7]. It adjusts the structure to eliminate clashes and makes it favorable to the Rosetta energy function. It is reported that applying relaxation before scoring is important for getting energy values that correlate well with experimental results [7].\n\nNote that, we applied the same AMBER-Rosetta protocol for all the models (including our model and baselines) to ensure fair comparison.\n\nIn fact, the AMBER-Rosetta relaxation process does not change the structure much. The average RMSD of CDR between the relaxed and unrelaxed structures is 1.31Å. Specially, for the most variable CDR-H3, the average RMSD of CDR between the relaxed and unrelaxed structures is 1.71Å.\n\n[7] Conway, Patrick, et al. Relaxation of backbone bond geometry improves protein energy landscape modeling. Protein Science 23.1 (2014): 47-55.", " **[Q2] About sequence-based models.**\n\nWe have considered this option when conducting the experiments to evaluate our model. However, we found the comparison is unfeasible and therefore dropped this option.\nIt has been shown that AlphaFold2 has its limitations. AlphaFold2 can accurately predict single-chain proteins, protein complexes with paired Multiple Sequence Alignments (MSAs) [4] and a portion of protein complexes without paired MSAs [5]. However, it generally struggles to predict antibody-antigen complexes [6]. The underlying reason is that AlphaFold2 has to rely on the coevolution information provided by MSAs. 
However, antibodies are produced by B-cells in response to the antigenic stimulations, which have no homologous sequences in any other species. \nTherefore, if we predict antibody sequences based on an antigen sequence, it is most likely that we cannot get a reasonable structure of the antibody-antigen complex. Consequently, we can hardly estimate binding energies and RMSDs that rely on the predicted antibody-antigen structures.\n\nThe reviewer might think that we can still use a neutralization predictor like Jin et al. to evaluate generation quality. However, a neutralization predictor is only applicable for a narrow class of antigens (e.g. SARS CoV) and training the predictor requires a lot of known antibody sequences that are effective to the antigen. Requiring a lot of known antibodies would significantly limit the ability to design antibodies for new antigen targets that have only a few known effective antibodies. In general, this is not transferable to other antigens and clearly deviates from the goal of our work: designing antibodies for an arbitrarily given antigen structure.\n\n[4] Evans, Richard, et al. Protein complex prediction with AlphaFold-Multimer. BioRxiv (2021).\n\n[5] Gao, Mu, et al. AF2Complex predicts direct physical interactions in multimeric proteins with deep learning. Nature communications 13.1 (2022): 1-13.\n\n[6] Yin, Rui, et al. Benchmarking AlphaFold for protein complex modeling reveals accuracy determinants. Protein Science 31.8 (2022): e4379.\n\n----\n\n**[Q3] About the model by Jin et al.**\n\nWe find that our model and the model by Jin et al. are hard to compare directly.\nThe goal of our model is to design antibodies for an arbitrarily given antigen structure. To achieve this goal, our model learns how residues interact with each other in the 3D space and generates CDR residues that fit (interact with) the antigen surface directly in the 3D space.\nIn contrast, the model by Jin et al. does not generate CDRs that fit a specific antigen structure. It mainly learns the sequences and structures of antibody CDR alone. It can target a specific antigen but this is done via a non-transferable sequence-based neutralization predictor, rather than learning how amino acids interact with each other in the 3D space. \nThe neutralization predictor is not transferable because it requires a lot of known antibody sequences that are effective to the antigen to train on. When there are a few known antibody sequences, which is often the case, we can hardly obtain a reliable predictor. \nTherefore, the capability of the model by Jin et al. is limited in this sense and cannot be directly applied to our more general setting.\n\nNonetheless, though the model by Jin et al. is not directly comparable, we still include baselines that share the same methodology and similar architectures with it (**Section E.4** in the supplementary material). They are trained on the same augmented dataset. We believe they are sufficient to show our model’s advantages.\n\n----\n\n**[Q4] About conditioning on antigen structures and antibody framework regions.**\n\nOur goal is to generate CDRs for an arbitrary antigen structure. Antigen structures are the input in the setting considered in our work. Ablating them leads to a different research topic. \nThe antibody framework regions are required for building a full antibody-antigen complex. If we remove the framework, we would eventually need to graft the CDR back to the framework. 
Therefore, we chose to kept the antibody framework and directly generate CDRs on it.\n\n----\n\n**[Q5] Training and inference time.**\n\nThe training time of our model and the baseline models are close, as they share the same architecture except for the output layers. They are all trained for 300K iterations. It takes 95h24m for our diffusion-based model, 51h45m for the autoregressive baseline, and 51h33m for the simple transformer baseline that generates CDR in one-shot. \n\nThe sampling time for a CDR with 10 residues varies across different models. It takes 8.94secs for the diffusion-based model, 0.12secs for the autoregressive baseline, and 0.30secs for the transformer baseline.\nThe diffusion-based model takes most time because the model is run for 100 times to go through the whole diffusion process. The autoregressive model’s inference time is dependent on the CDR length and 10 residues need 10 runs.", " We thank the reviewer for the valuable comments and below is our response to the questions.\n\n----\n\n**[Q1] Comparison to transformer and auto-regressive baselines.**\n\nAs suggested by the reviewer, we set up two additional baselines for comparison: (1) the first baseline *autoregressively* generates CDR sequences and predicts structures simultaneously; (2) the second baseline generates sequences and structures in one-shot, without iterative processes. These two baseline models share the same transformer-based network architecture except for their output layers. They are trained on the same augmented training dataset. More details about the baselines and experimental results are put to **Section E.4** in the updated version of the supplementary material.\n\nClearly, our proposed model achieves better scores than the baselines. We think it is not surprising for the following reasons: \nAutoregressive models generate residue sequentially and shortsightedly. That means a residue generated halfway might limit the quality of the final generation result.. \nAs for the transformer-based model that generates sequences and structures in one-shot, it is unlikely that the network generates a perfect structure within only one step. \nIn contrast to these two methods, diffusion models generate structures by iteratively refining them. The iterative refinement process can not only eliminate flaws gradually, but also attend to global information. This is what autoregressive models and one-shot models cannot do and makes diffusion-based models more competitive.\n\nWe would like to comment more on why diffusion-based models are more suitable for this task by referring to the recent advances in machine learning for biology and chemistry. \nOne of the core ingredients of diffusion-based models is its iterative refinement process. This iterative principle has been shown a more effective way to generate biological and chemical 3D structures, e.g. 3D molecules [1,2], and molecular conformations [3]. The most prominent one is AlphaFold2, whose structure module generates protein structures by repeatedly refining them. \nDifferent from image or text generation, molecular structure generation demands high precision. For example, a 10 unit deviation in the RGB-color space might not affect human perception of an image. However, a 1Å deviation in molecules might lead to significant change. Therefore, iterative approaches including diffusion models that excel at eliminating flaws and considering global information are a better choice.\n\n[1] Hoogeboom, Emiel, et al. 
Equivariant diffusion for molecule generation in 3d. International Conference on Machine Learning. PMLR, 2022.\n\n[2] Satorras, Victor Garcia, et al. E (n) Equivariant Normalizing Flows. Advances in Neural Information Processing Systems. 2021.\n\n[3] Xu, Minkai, et al. Geodiff: A geometric diffusion model for molecular conformation generation. ICLR, 2022.\n\n", " **[Q] About the contribution of the three components (sequence, Ca, orientation).**\n\nFor the sequence-structure codesign task, the three components are equally important because we need to model both sequence and structure ($C_\\alpha$ & orientation).\nIf we fix the sequence and only generate structures ($C_\\alpha$ and orientation), the problem becomes CDR structure prediction (loop modeling). Fixing the structure ($C_\\alpha$ and orientation) and generating sequences is fix-backbone protein design.\n\nRepresenting the structure using only $C_\\alpha$ coordinates is not sufficient. The orientation is used to implicitly represent the coordinate of other backbone atoms including N, C, and O. Therefore, we think both $C_\\alpha$ and orientation are indispensable for generating structures.\n\n----\n\n**[Q] About the GNN baseline.**\n\nThe GNN-baseline is antigen-independent. It directly conditions on an arbitrarily given antigen structure. It only shares a similar methodology with the model by Jin et al. We have added more details about the GNN baseline in the **Section E.4** of the updated supplementary material.\n\n----\n\n**[Q] Comparison to IgFold.**\n\nWe find it very hard to compare our model to IgFold mainly for the following reasons: First, IgFold predict antibody structures based on known sequences, but in our setting, antibody sequences are unknown and need generation. Second, IgFold can only predict the structure of antibody alone, rather than antibody-antigen complexes. However, we consider how to design an antibody that binds to the antigen.\n\n(2/2)", " We thank the reviewer for the valuable comments and below is our response to the questions.\n\n----\n\n**[Q] About general cases where bound complexes are unknown.**\n\nWe used HDOCK to generate antibody orientations, but we found that HDOCK does not always give good predictions. HDOCK outputs dozens of docking structures but a large portion of them are incorrect, e.g. non-paratope regions are bound to the antigen. Therefore, we have to manually select reasonable structures and this makes it difficult to conduct large scale evaluation. \nWe also considered using ClusPro but the server does not allow submitting a large bulk of jobs in a short time, and it is much slower than HDOCK due to the server’s long job queue. For AlphaFold2, it has been shown that it struggles to predict antibody-antigen complex structures [1], so it is not an ideal choice for generating antibody-antigen template, either. Therefore, we believe generating antibody orientations for antigens is a challenging problem which might be a good direction for future work, and future efforts would eventually fill the gap in a neat way.\n\nAlthough we did not fully address the cases where bound complexes are unknown, our main setting where bound complexes are known are practical in some real scenarios.\nConsider a case where we have an antibody-antigen complex structure and we would like to optimize the binding affinity. If mutating several residues on CDRs does not produce desirable results, we could use the proposed model to aggressively redesign the CDRs.\n\n[1] Yin, Rui, et al. 
Benchmarking AlphaFold for protein complex modeling reveals accuracy determinants. Protein Science 31.8 (2022): e4379.\n\n----\n\n**[Q] About additional baselines.**\n\nWe have conducted experiments with two more baselines:\n\n(1) the first baseline *autoregressively* generates CDR sequences and predicts structures simultaneously; (2) the second baseline generates sequences and structures in one-shot, without iterative processes. These two baseline models share the same transformer-based network architecture except for their output layers. They are trained on the same augmented training dataset. More *details* about the baselines and experimental *results* are put to **Section E.4** in the updated version of the supplementary material.\n\nClearly, our proposed model achieves better scores than the baselines. We think it is not surprising for the following reasons: Autoregressive models generate residue sequentially and shortsightedly. That means a residue generated halfway might limit the quality of the final generation result. As for the transformer-based model that generates sequences and structures in one-shot, it is unlikely that the network generates a perfect structure within only one step. In contrast to these two methods, diffusion models generate structures by iteratively refining them. The iterative refinement process can not only eliminate flaws gradually, but also attend to global information. This is what autoregressive models and one-shot models cannot do and makes diffusion-based models more competitive.\n\n----\n\n**[Q] About antigen overlap between train and test splits.**\n\nThe testset contains 20 antibody-antigen complexes. 5 of them do not have antigens that are similar (sequence identity >= 50%) to any antigen in the training set.\nAlthough the remaining 15 complexes contain antigens that appear in the training set, their binding interfaces are different from the training sets. This thanks to that we clustered antibodies according to CDR similarity. Consequently, two antibodies from different clusters bind to the same antigen at different regions and poses.\n\nWe quantify the difference in binding interfaces between the training set and test set by the following method: For each antigen in the test set, we find all the similar antigens (sequence identity >= 50%) in the training set. For each antigen pair (an antigen from the testset, and a similar antigen from the training set), we identify residues interacting to the antibody on the two antigens according to their antibody-antigen complexes. Then, we align the two antigens and compute the ratio of common interacting residues as the binding interface similarity.\nThe average binding interface similarity between the testset and the training set is 17.6%, and the max similarity is 61.0%. Therefore, we believe our training set and test-set are split properly in terms of antigen overlaps.\n\n(1/2)", " We thank the reviewer for the valuable comments and below is our response to the questions.\n\n----\n\n**[Q1] About neutralization predictors.**\n\nYes. A neutralization predictor is trained for some specific class of antigens (e.g. SARS CoV). It requires a lot of known antibody sequences that target the antigen. When we do not have many known antibody sequences for a new antigen, which is often the case, we can hardly obtain a reliable neutralization predictor. 
Therefore, it is not suitable for antigens that have only a few known effective antibodies.\n\nAnother drawback of neutralization predictors is that it is a black-box. It regresses antibody sequences on neutralization or binding affinities to a specific antigen using neural networks. Therefore, it does not have explicit knowledge about how the amino acids on the antibody and antigen interact with each other in the 3D space, which is fundamental to the antigen-binding behavior of antibodies. Lacking interpretability undermines the reliability of neutralization predictors.\n\nIn contrast, our model is *structure-based*. It learns to generate amino acids that interact with the amino acids on the given structure (antigen). This is driven by the general rule of how amino acids interact physically, which is independent of antigen types. Therefore, our model is generalizable to arbitrary antigens in this sense. \n\nIn addition, our model generates CDRs bound to the antigen in the 3D space. This enables structural (RMSD) and biophysical (binding energy) analysis, interpretation, and evaluation which requires 3D structures. However, a black-box sequence-based model does not provide such interpretability.\n\n\n----\n\n**[Q2] About the effect of augmented datasets.**\n\nThe results about the performance of models trained with/without the augmented dataset are put to the **Section E.3** in the supplementary material.\n\nAccording to the results, our main finding is that the contribution of the augmented dataset is most significant on CDR-H3s.\nAs CDR-H3 is the most variable region, using extra training data would help the model generalize better and lead to better structure accuracy and sequence recovery.\nOther CDR regions are conservative, so learning only from antibody structures is somewhat sufficient to model these non-versatile regions.\n\n----\n\n**[Q3] About the contribution of different components.**\n\nAs discussed in our response to Question 1, our model is structure-based. To achieve the goal of generalizability to arbitrary antigens, we chose to directly condition our model on antigen structures. Removing antigen structures would lead to another setting. \n\nTo reveal the contribution of our model design, we have conducted experiments with two more baselines:\n\n(1) the first baseline *autoregressively* generates CDR sequences and predicts structures simultaneously; (2) the second baseline generates sequences and structures in one-shot, without iterative processes. These two baseline models share the same transformer-based network architecture except for their output layers. They are trained on the same augmented training dataset. More *details* about the baselines and experimental *results* are put to **Section E.4** in the updated version of the supplementary material.\n\nClearly, our proposed model achieves better scores than the baselines. We think it is not surprising for the following reasons: Autoregressive models generate residue sequentially and shortsightedly. That means a residue generated halfway might limit the quality of the final generation result. As for the transformer-based model that generates sequences and structures in one-shot, it is unlikely that the network generates a perfect structure within only one step. In contrast to these two methods, diffusion models generate structures by iteratively refining them. The iterative refinement process can not only eliminate flaws gradually, but also attend to global information. 
This is what autoregressive models and one-shot models cannot do and makes diffusion-based models more competitive.\n\n", " This paper introduces a diffusion-based model for jointly modeling the sequence and structure of complementarity-determining regions (CDRs) of antibodies, conditioned on antigen structure and antibody framework. The method enables the design of CDRs in a wide range of scenarios (from least constrained to most): CDR sequence-structure co-design, fixed-backbone sequence design, or antibody optimization (starting from a known antibody). The approach deviates from prior work notably via the conditioning on the antigen 3D structure and factoring in the side-chain orientations. Experiments demonstrate superior performance in the aforementioned design settings. Lastly, authors curate an extended set of protein structures for model training: antibody-antigen from SAbDab and pseudo-CDRs from the PDB (assimilating loops and the corresponding chains they interact with as proxy antibody-antigen structures). **Strengths**\n- Significance: the ability to design novel antibodies that optimally bind to target antigens, while satisfying other constraints is critical to various antibody therapies\n- Originality: this work makes solid contributions in proposing a model that 1) handles certain design settings more naturally (eg, antibody optimization) 2) performs better than existing baselines (based on experiments reported in this work) 3) models the entire CDR structures with atomic precision through the incorporation of side-chain geometry.\n- Clarity: the paper is very well written and structured. Figures and notations are very clear. \n- Quality: sound and thorough experimental design. \n\n**Weaknesses**\n- Quality: a few minor claims / points are not fully substantiated (see Questions) Lines 92-93 \"It relies on an additional antigen-specific predictor to predict the neutralization of the designed antibodies, which is hard to generalize to arbitrary antigens\". Could you please clarify what you meant here regarding difficulty to generalize Vs your method? Do you mean this approach requires to train a new predictor for each new antigen (Vs your method handles any antigen by construction)? If so, isn't the antigen-specific predictor more likely to provide good results in the focus application it's been design for? Or do you have evidence the more general modeling you are proposing is both more flexible and has higher task performance?\n\nLines 274-275 -- \"We find that using the augmented training dataset could enhance the performance on the test set\". Did you train a model without the pseudo-CDR structures to give a sense of how useful this data extension is? \n\nTables 1/2: do you have a sense for the relative contributions to performance lift from the different modeling decisions you made over prior baselines (eg., conditioning on antigen structure, diffusion-based modeling, factoring in side-chain orientations, rotation & translation equivariance)\n\nMinor points:\n- Line 513 -- dead reference - Limitations are very briefly discussed in the conclusion. \n- Potential negative societal impact of the work is not discussed (developed method may be re-purposed more broadly for other protein design tasks which may be malignant).", " This paper addresses the problem of antigen-conditional antibody design. 
While there has been a line of work using ML for antibody design, existing work mostly focused on antigen-independent antibody generation, yet antigen-conditional antibody design has been far more common in practical use cases. This paper makes progress on an important problem.\n\nTwo main contributions of this paper are:\n1) A new curated dataset of \"antibody-like\" loops interacting with other protein chains, extracted from PDB.\n2) A new approach for antigen-conditional antibody design (joint sequence-structure design) based on diffusion models. Strengths:\n- (Originality & Significance) This paper considers antibody design conditioned on the antigen 3D structure. This has been an important yet challenging task, and this is one of the first ML approaches to do so.\n- (Originality) The authors curated a new dataset of antibody-like complexes (loops in contact with another chain). I believe this dataset will be useful for the research community.\n- (Clarity) This paper is mostly clearly written with high quality figures, although the writing could be improved at places (e.g. typos and more accurate language about CDRs).\n- (Quality) The empirical results seem sound, demonstrating improvements in three different antibody design tasks.\n\nWeaknesses:\n- (Significance) While the paper takes a step forward in antigen-conditioned antibody design, there is still a notable gap between the scenario in the paper and the real use case: In a real use case, we do not know the relative orientation between the antibody and the antigen, while in the design tasks in the paper the orientation is given based on known antigen-antibody complexes. The paper does address this concern in Section 4.5 by using HDOCK to dock the antibody template as a precursor step. However, this is only done for one antigen. in my view, this paper would be a lot stronger if the authors can show the design performance in Sections 4.2-4.4 both with and without known bound complexes. \n- (Quality) This paper would also be strengthened by more ablation studies on the relative merits of different model components, as well as by the inclusion of simpler baselines.\n- (Quality) The split in this paper is based on CDR sequence identity, and not based on antigen structures. It would be helpful to at least show statistics of antigen overlap (e.g. RMSD) between train and test splits. Major questions/suggestions:\n1. Among the three components (sequence, Ca, orientation), which one is more important to model? Ablation studies would make the paper a lot stronger.\n2. The GNN baseline in all results is antigen-independent? If so it'd be greatly appreciated to mark that (e.g. add a column in tables to say antigen dependent/independent), because otherwise it's misleading to directly compare the two.\n3. How many antigens are shared between the train and test splits? It would be important to at least show statistics of antigen overlap (e.g. min RMSD or % overlap) between train and test splits.\n4. How would the performance degrade if using HDOCK instead of known complexes for all the main results (sections 4.2-4.4)? \n5. Comparison to antibody structure prediction methods (e.g. IgFold) for RMSD?\n\nOther questions:\n1. Does \"side chains\" mean \"Cb\" specifically in the paper? It would be helpful to clarify whether it's all atoms in side chains or just Cb.\n2. Compared to using equal probabilities for all AAs in Eq. 1, would it be helpful to use e.g. BLOSUM or other priors?\n3. How is the posterior derived from Eq. 1?\n4. 
Is minimizing the expected KL (Eq. 4) equivalent to maximum likelihood estimation (MLE)? \n5. It'd be helpful to include simpler baselines, e.g. IMGT alignment consensus of antigen-specific antibodies. Although I expect simpler baselines to be worse, it'd still be helpful to put results into context.\n6. It would be more convincing for antibody optimization to also show results on antibody DMS datasets -- unless it's hard to evaluate likelihood in diffusion models? See above (\"weaknesses\") for limitations. The authors did acknowledge the main limitations and I appreciate that as a reader. ", " The paper presents a diffusion model for jointly generating the CDR sequence and structure of an antibody conditioned on its framework regions and a target antigen. Unlike existing methods, the presented method enables i) conditioning the generation on the antigen structure instead of only framework regions, and ii) predicting side-chain orientations. Lastly, the paper also presents a dataset of experimentally validated antibody structures iii) augmented by pseudo-antibody structures derived from the structure of non-antibody proteins. ## Strengths\n* The paper is mostly clearly written\n* The use of a diffusion-based model for generating antibodies is new\n* The application of antibody design is important, e.g. for drug development\n\n## Weaknesses \n* The benefit of the diffusion-model over alternative generative models is not evaluated experimentally\n* The accuracy of predicted side-chain orientations without refinement by the Rosetta packing algorithm is not evaluated experimentally\n* The benefit of augmenting the dataset by pseudo-antibody structures is not evaluated experimentally\n* Important baselines are missing\n ## Questions\n1) Does the diffusion model perform better than alternative models such as an autoregressive model (e.g. Jin et. al) or Transformer? Can you show this by ablation? Only change the architecture while using the same (augmented) dataset for training and tuning hyper-parameters appropriately.\n\n2) How does the proposed method compare to a simple sequence-based, non-iterative, model that generates CDR regions conditioned on the sequence (not the structure) of framework regions and the antigen, and predicting the structure with AlphaFold?\n\n3) Please compare to Jin et. al (http://arxiv.org/abs/2110.04624) also in section 4.2 and section 4.4. by training the model on the same augmented dataset of tuning hyper-parameters appropriately.\n\n4) Please show by ablation the effect of i) conditioning on framework regions and ii) the antigen structure.\n\n5) What is the training and inference time of the diffusion model compared to an autoregressive model (e.g. Jin et. al)?\n\n6) How does the model scale to long sequences, which is important for generating long CDRs or also framework regions?\n\n7) Does augmenting the training dataset by pseudo-structures improve the performance of the diffusion model and baseline model? You are referring to results in the appendix. However, I did not find any results. Since this is a major contribution of your paper, results should be in the main text.\n\n8) What is the accuracy of predicting side-chain orientations before refining them by Rosetta?\n\n9) What is the influence of applying the Rosetta packing algorithm and AMBER on the final structure? Does it change the ranking of methods?\n\n10) How do you apply the method to double-stranded antibodies? Do you generate the two different chains independently? 
\n\n11) Why does IMP% in Table 3 decrease with increasing t?\n\n12) What is the message of figure 3 and figure 4? Consider showing these figures in the appendix. Otherwise, interpret samples and also show samples of baseline methods. Yes" ]
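The antigen-overlap analysis in the rebuttal above quantifies a binding-interface similarity between test and training antigens. The following is a minimal, runnable sketch of that computation, not the authors' released code: it assumes the antibody-contacting residue sets have already been extracted from each complex and expressed in a shared (aligned) antigen residue numbering, and normalizing by the size of the test-antigen interface is our assumption, since the response does not specify the denominator of the ratio.

```python
# Illustrative sketch only. Inputs are assumed to be precomputed: for each
# antibody-antigen complex, the set of antigen residues contacting the antibody,
# expressed in a residue numbering shared across similar (>= 50% identity) antigens.

def interface_similarity(test_iface: set, train_iface: set) -> float:
    """Ratio of the test antigen's antibody-contacting residues that are also
    antibody-contacting in the similar training antigen."""
    if not test_iface:
        return 0.0
    return len(test_iface & train_iface) / len(test_iface)


def summarize_overlap(pairs):
    """pairs: iterable of (test_iface, train_iface) for antigen pairs that
    passed the sequence-identity threshold."""
    sims = [interface_similarity(t, r) for t, r in pairs]
    return (sum(sims) / len(sims), max(sims)) if sims else (0.0, 0.0)


# Toy usage with made-up residue indices:
avg_sim, max_sim = summarize_overlap([
    ({101, 102, 105, 150}, {101, 150, 200}),  # 2/4 shared -> 0.5
    ({33, 34, 35}, {80, 81}),                 # no overlap  -> 0.0
])
```

Averaging and taking the maximum of such ratios over all qualifying antigen pairs would correspond to the 17.6% mean and 61.0% maximum similarities quoted in the response.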
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "kbnjaU4sqH", "64q5oVGC0f0", "T1rH8mSdctM", "al5S-1vHUc39", "UyCKO5jIrWNW", "U-jiuHMHlCko", "_XsZRYOtyr", "0Ck8zXfinzA", "kkNqs5DuFQT", "AMytTHJwwq8", "24poOHqLJeT", "nips_2022_jSorGn2Tjg", "nips_2022_jSorGn2Tjg", "nips_2022_jSorGn2Tjg" ]
nips_2022_QLGuUwDx4S
DropCov: A Simple yet Effective Method for Improving Deep Architectures
Previous works show global covariance pooling (GCP) has great potential to improve deep architectures especially on visual recognition tasks, where post-normalization of GCP plays a very important role in final performance. Although several post-normalization strategies have been studied, these methods pay more close attention to effect of normalization on covariance representations rather than the whole GCP networks, and their effectiveness requires further understanding. Meanwhile, existing effective post-normalization strategies (e.g., matrix power normalization) usually suffer from high computational complexity (e.g., $O(d^{3})$ for $d$-dimensional inputs). To handle above issues, this work first analyzes the effect of post-normalization from the perspective of training GCP networks. Particularly, we for the first time show that \textit{effective post-normalization can make a good trade-off between representation decorrelation and information preservation for GCP, which are crucial to alleviate over-fitting and increase representation ability of deep GCP networks, respectively}. Based on this finding, we can improve existing post-normalization methods with some small modifications, providing further support to our observation. Furthermore, this finding encourages us to propose a novel pre-normalization method for GCP (namely DropCov), which develops an adaptive channel dropout on features right before GCP, aiming to reach trade-off between representation decorrelation and information preservation in a more efficient way. Our DropCov only has a linear complexity of $O(d)$, while being free for inference. Extensive experiments on various benchmarks (i.e., ImageNet-1K, ImageNet-C, ImageNet-A, Stylized-ImageNet, and iNat2017) show our DropCov is superior to the counterparts in terms of efficiency and effectiveness, and provides a simple yet effective method to improve performance of deep architectures involving both deep convolutional neural networks (CNNs) and vision transformers (ViTs).
Accept
Both reviewer Fzo6, reviewer VWPn and reviewer 5eq4 have concerns and been questions regarding equation 5. Please clarify the clarifications on the paper and add intuition and more discussion of Eq. 5. The paper and comments from the authors indicate that dropout base regularizations are effective (Maxdropout, Maxout and Decov also outperforms GCP). This does mean that a large part of the benefits of the proposed method (no inference processing, lower complexity) are the result of that dropout/regularization in training, which takes away from the contributions. Overall, I think the paper is borderline, leaning to acceptance, as the proposed DropConv does out perform other dropout/regularization methods and I believe the paper might benefit the community. I'd strongly encourage the authors to review their manuscript and address the reviewers’ concerns as best as possible in the revised manuscript.
train
[ "hYxrbNCEApM", "lwA2Pin-DEU", "nKTIpsVcVL", "YuEhvamr4oq", "0LK9Ncl_n6f", "uvQAFPfdmRF", "5yysqDOh1Uh", "mkK0-YOZNA", "1CUngiee1q", "2WqE6OvkFR_" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " [Q1]: For the experiments in Table 2, the counterpart of DropChannel seems analogous to ACD except that the probability in ACD is determined by features, while the performance gap is so significant and even higher than DropElement. Could the authors analyze the phenomenon? How does the DropChannel and DropElement perform when using different dropout probability beyond 0.5?\n\n[A]: Thanks for the comment. We experiment with DropChannel and DropElement by using different dropout ratios (please refer to [Common Q3] above). <1> In Tab. 2, DropChannel and DropElement are performed with the fixed dropout probability of 0.5. Note that for the same dropout probability, DropChannel drops about two times of elements than DropElement for covariance representations. Therefore, for small feature dimensions (i.e., $d=64$), dropout probability of 0.5 is too large for DropChannel, leading an inferior performance. As shown in [Common Q3], DropChannel (dim=64) with $\\rho=0.3$ achieves a comparable result. Besides, it is harder to select channel features for reaching the good trade-off for small feature dimension, where feature correlation is closer related to its importance. Therefore, for dim=64, DropChannel with $\\rho=0.3$ is slightly inferior to DropElement with $\\rho=0.5$. For large feature dimension (i.e., $d=256$), DropChannel with the best $\\rho$ of 0.5 is superior to the best DropElement ($\\rho=0.7$). <2> As shown in [Common Q3], ACD is superior to DropElement and DropChannel for all dropout ratios, since our ACD can adaptively determine probability $\\rho$ of dropout by balancing representation decorrelation and information preservation. These results further demonstrate the effectiveness of our ACD. \n\n\n[Q2]: It is unclear to me why computational complexity is linear for DropCov while quadratic to element-wise methods (in Table 1).\n\n[A]: Thanks for the comment. Table 1 compares computational complexity of different normalization approaches. The normalization of DropCov is to perform dropout on channel of features (i.e., which channel is dropped or not), whose complexity is linear with respect to number of channels (i.e., $d$). For element-wise normalization methods, they perform normalization on each element of covariance representations. For $d$-dimensional features $\\mathbf{X}$, dimension of covariance representations is $d^{2}$, and so the computational complexity of element-wise normalization methods is $O(d^{2})$.\n", " We sincerely thank the reviewer for recognizing the detailed and valuable analysis of MPN, and positive comments on good writing and extensive experiments. In the following, we respond carefully to the reviewer’s questions and hope our answers could address your concerns.\n\n[W1-1]: The first corollary (line 120) states that setting power parameter 0.5 is optimal according to Eq. 2 and Fig. 1, while it lacks either theoretical proof or comprehensive empirical studies. Is the behavior of power parameter fairly invariant to dataset or network backbone?\n\n[A]: Thanks for the comments. Kindly note that theoretical proof on optimal power parameter is still an open problem, while [27] shows power parameter 0.5 accounts for a robust covariance estimation. For empirical studies, both Fig. 3 (w.r.t. AlexNet on ImageNet) in [27] and Fig. 2 (w.r.t. VGG-VD on three small-scale datasets) in [28] show 0.5 is the best choices of $\\alpha$, and they have similar trends on accuracies for various $\\alpha$. 
Additionally, we conduct more experiments (please refer to [Common Q2] above) to observe the effect of $\\alpha$ in MPN, which show that $\\alpha$ of MPN has consistent behavior for different backbones. The above results suggest that the behavior of the power parameter is invariant to the dataset or network backbone.\n\n[W1-2]: How do adaptive power normalization (APN) and Eq. (5) realize in practice? Could the authors provide their training behavior if they are variable during training? \n\n[A]: Thanks for the comments. For realizing APN, we can use the same computational process as MPN [27], except for the choice of $\\alpha$. Given an input $\\mathbf{X}$, we first compute the covariance $\\mathbf{X}^{T}\\mathbf{X}$ and obtain its eigenvalues {$\\lambda_{i}$} using eigenvalue decomposition. Based on {$\\lambda_{i}$}, our APN computes $\\alpha$ by solving Eq. (4) with a grid search strategy. Finally, we perform regular MPN using the optimized $\\alpha$. To realize Eq. (5) (i.e., the adaptive dropout probability of ACD), we first compute $\\boldsymbol{\\omega}$ and $\\boldsymbol{\\pi}$ from the input $\\mathbf{X}$ using the bottom part of Eq. (5). Then, $\\rho$ can be easily calculated by using the upper part of Eq. (5) with $\\boldsymbol{\\omega}$ and $\\boldsymbol{\\pi}$. Finally, we perform channel dropout on the input $\\mathbf{X}$ using $\\rho$, and then compute GCP. All source code will be released upon acceptance. For training behavior, please refer to [Common Q1] above for the details.\n\n[W2]: The motivation of performing dropout on convolution channels is not new. The main difference is specializing the strategy with global covariance pooling.\n\n[A]: Kindly note our DropCov is motivated by the finding in Corollary 1, and tries to achieve a good trade-off between representation decorrelation and information preservation more efficiently. To the best of our knowledge, we make the first attempt to develop a dropout-based method (i.e., ACD) for normalizing GCP. Besides, our ACD is clearly different from existing dropout methods (e.g., maxdropout and DropConnect) that require manual tuning of the probability $\\rho$: ACD adaptively determines the dropout probability $\\rho$ to reach a good trade-off between representation decorrelation and information preservation, while being clearly superior to channel dropout with fixed dropout ratios (see Tab. 2 and [Common Q3] above). \n\n[W3]: DropCov brings significant increase on parameter size and computational overhead.\n\n[A]: We would like to clarify that DropCov actually achieves a better trade-off between performance and model complexity, with affordable computational cost, especially at inference. As shown in Tab. 5, ResNet-50 and ResNet-101 with DropCov respectively achieve better performance than ResNet-101 and ResNet-152 with GAP, while having much less model complexity (e.g., 32.0M parameters and 6.19G FLOPs of ResNet-50+DropCov vs. 44.6M parameters and 7.57G FLOPs of ResNet-101). Besides, as shown in Tab. 6, ResNet-101 with GCP is much more lightweight and better than ResNet-152 with GAP on iNat2017. Additionally, the results in Tab. 4 show our DropCov with $d=64$ (73.5\\%) achieves clear gains over GAP (70.2\\%) using nearly the same inference time (1.04$\\mu$s vs. 0.97$\\mu$s), where only an extra <1.0M parameters and 1.0 GFLOPs are introduced. This indicates DropCov with low-dimensional GCP (e.g., $d=64$) still brings clear improvement over GAP, while introducing moderate computational cost. 
Finally, since our DropCov performs no normalization during inference, it achieves comparable inference speed to GAP (refer to the table at https://github.com/36f857fe/InferSpeed), especially for strong ResNet-101 and ViT models.\n", " We sincerely thank the reviewer for the positive comments on the strength of the experiments, good presentation, and interesting results about the analysis of MPN. In the following, we respond carefully to the reviewer’s questions and hope our answers could address your concerns.\n\n[W1]: I think the main drawback of this work is the significance. I can acknowledge the improvement is clear, the idea is simple. But as GCP is computationally more expensive than 1st order pooling, not much attention is paid to the field.\n\n[A]: Thanks for the comment. We kindly note that deep GCP has been successfully adopted in various tasks (e.g., fine-grained/general image classification, video recognition, ReID, few-shot learning and Graph NN), and a number of works have been published at top-tier conferences and journals (a brief summary can be found on a public website: https://saimunur.github.io/spd-archive/ under the umbrella of \"symmetric positive definite matrices\"). It indicates GCP has attracted a lot of research interest. Meanwhile, we would like to clarify that GCP actually achieves a better trade-off between performance and computational complexity than 1st order pooling (GAP). Specifically, as shown in Tab. 5, ResNet-50 and ResNet-101 with DropCov respectively achieve better performance than ResNet-101 and ResNet-152 with GAP, while having much less computational complexity. Similar phenomena are also observed for ViT models. Besides, as shown in Tab. 6, ResNet-101 with GCP is much more lightweight and better than ResNet-152 with GAP on long-tailed iNat2017. Additionally, since our DropCov performs no normalization during inference, it achieves comparable inference speed to GAP (refer to the table at https://github.com/36f857fe/InferSpeed), especially for strong ResNet-101 and ViT models. In conclusion, previous works have shown that GCP is a very competitive option in a variety of visual tasks, as compared to GAP, and our DropCov provides a more efficient yet effective solution. \n\n\n[W2]: Effect of power on different sizes of datasets with different capacities of models.\n\n[A]: Thanks for the comments. Please refer to [Common Q2] above for more results.\n\n\n[W3]: Improvement of APN is quite marginal & extra visualization\n\n[A]: Kindly note our APN is proposed for verifying the claim in Corollary 1. As stated in lines 284-285, the average values of $\\alpha$ achieved by APN are near 0.5, which further accounts for why 0.5 is the widely used choice of $\\alpha$ for MPN. Besides, APN brings 0.1%-0.2% gains over MPN with $\\alpha=0.5$ by considering the effect of inputs. We would like to clarify that these are non-trivial gains on ImageNet over strong MPN, and the recent works [r1,r2] also bring similar gains (0.1%-0.3%) over MPN. These results verify the conclusion in Corollary 1. Furthermore, we picked some samples with large and small $\\alpha$ achieved by APN on the validation set of ImageNet-1K, which are shown at an anonymous URL: https://github.com/36f857fe/Visualization. From them we can observe that the samples containing simple objects and more redundant information have small $\\alpha$, where MPN tends toward representation decorrelation. On the contrary, the samples involving less redundant information (e.g., scenes) have large $\\alpha$, where MPN tends toward information preservation. 
These phenomena are consistent with our finding.\n\n[r1] Why Approximate Matrix Square Root Outperforms Accurate SVD in Global Covariance Pooling? ICCV, 2021\n\n[r2] Improving Covariance Conditioning of the SVD Meta-layer by Orthogonality. ECCV, 2022\n\n[W4]: Discussion on Eqn (5) & range of $\\rho$ in Eqn (5) & dropout in Eqn (6)\n\n[A]: Thanks for the comments. <1> Please refer to [Common Q1] above for more discussions on Eqn (5). <2> In practice, we restrict $\\rho$ in Eqn (5) by using $\\max(0, \\rho)$ and $\\min(1, \\rho)$. <3> Thanks for pointing out this potential ambiguity, and we will modify Eqn (6) to $\\mathbf{z}=\\textrm{V}(\\mathbf{Y}^{T}\\mathbf{Y}), \\mathbf{Y}=\\delta_{\\rho}(\\mathbf{X})$. \n\n\n[W5]: Experiments with different dropout ratios\n\n[A]: Thanks for the comments. Please refer to [Common Q3] above for more results.\n\n\n[W6]: The intuition behind “element” and “channel” dropout. \n\n[A]: Thanks for the comment. “Element” dropout performs dropout on elements of the covariance representation based on the values of the elements ($[\\mathbf{X}^{T}\\mathbf{X}]_{ij}$), which indicate the correlation between the $i$-th and $j$-th channel features. Therefore, “element” dropout only considers feature correlation. “Channel” dropout performs dropout on feature channels based on the channel weights obtained by the attention module of ACD (i.e., $\\boldsymbol{\\omega}$). Therefore, “channel” dropout only considers feature importance. Based on the above discussion, applying “element” dropout or “channel” dropout alone is equivalent to considering only feature correlation or only feature importance, respectively. We will add the above explanations in the revision.\n", " We sincerely thank the reviewers for the valuable comments. Here, we respond to three common questions and hope our answers could address your concerns.\n\n[Common Q1]: More discussion of equation (5)\n\n[A]: For realizing our ACD, Eqn (5) aims to adaptively decide the probability $\\rho$ of channel dropout to reach a good trade-off between representation decorrelation and information preservation, where $\\frac{D}{\\log(d)}$ and the inner product <$\\boldsymbol{\\omega},\\boldsymbol{\\pi}$> are designed to account for the effect of the feature dimension ($d$) and the relationship between feature correlation and feature importance, respectively. In particular, feature correlation (i.e., $\\boldsymbol{\\pi}$, computed by summing the elements along each row of $\\mathbf{X}^{T}\\mathbf{X}$) and feature importance (i.e., $\\boldsymbol{\\omega}$, obtained by channel attention) indicate representation decorrelation and information preservation, respectively. Clearly, larger feature correlation results in stronger representation correlation, while features with larger channel weights contain more important information. Therefore, the relationship between feature correlation and feature importance is a good indicator for performing channel dropout. Intuitively, if feature correlation is closely related to (i.e., strongly coupled with) feature importance, it is hard to select features to reach a good trade-off between representation decorrelation and information preservation, and so we need to carefully adopt dropout for channel features under the random setting, leading to a small $\\rho$. Otherwise, we can perform dropout more safely to achieve a trade-off, and adopt a large $\\rho$. \n\nBesides, we have shown the behavior of $\\rho$ during training in Fig. 
A2 of the supplementary materials, where we observe that $\\rho$ varies along training epochs, and $\\rho$ has a clear variance over the training samples at each epoch. They show that ACD has the ability to adaptively determine the dropout probability $\\rho$. Besides, the small feature dimension $(d=64)$ has a larger variance of $\\rho$ than the large one $(d=256)$, which is because feature correlation is more closely related to feature importance for small $d$, and so it is harder to select features to reach a good trade-off. The results in Tabs. 2 and 3 show our ACD is clearly superior to those with fixed $\\rho$, verifying the effectiveness of our ACD.\n\nWe will add more discussion of Eqn (5) in the revision (a schematic sketch of the ACD pipeline is also given after the reviews below).\n\n[Common Q2]: Effect of power on different sizes of datasets with different capacities of models\n\n[A]: We further experiment with ResNet-50 and ResNet-101 on ImageNet-1K to observe the effect of the power parameter. Since MPN is very computationally expensive (see Tab. 4), we set $\\alpha$ of MPN to {0.1, 0.3, 0.5, 0.7, 1.0} and report the results (i.e., convergence curves) achieved before the deadline at an anonymous URL: https://github.com/36f857fe/PowerMPN, where we can see that the behaviors of ResNet-50 and ResNet-101 with various $\\alpha$ are consistent with those of ResNet-18 (i.e., Fig. 1 (a)), verifying our analysis on MPN again. Besides, we kindly note that both Fig. 3 (w.r.t. AlexNet on ImageNet) in [27] and Fig. 2 (w.r.t. VGG-VD on three small-scale datasets) in [28] show 0.5 is the best choice of $\\alpha$, and they have similar trends in recognition accuracy for various $\\alpha$. These results suggest $\\alpha$ of MPN has a consistent behavior for different models and various sizes of datasets. \n\nWe will report more results on the effect of $\\alpha$ in the revision. \n\n[Common Q3]: 'DropElement' and 'DropChannel' with different dropout ratios\n\n[A]: We experiment with 'DropElement' and 'DropChannel' using different dropout ratios, where we adopt exactly the same settings as in Tab. 2. Due to the time limitation, we set the feature dimension $d$ to 64 and 256, while dropout ratios vary in the range of $[0.1, 0.3, 0.5, 0.7, 0.9]$. As compared in the table below, our ACD is superior to 'DropElement' and 'DropChannel' for all dropout ratios. Particularly, the best dropout ratios are quite different for the two methods (i.e., 'DropElement' and 'DropChannel') and various feature dimensions. In contrast, our ACD can adaptively determine the dropout probability $\\rho$ by balancing representation decorrelation and information preservation, while always achieving the best performance. These results further demonstrate the effectiveness of our ACD. \n\n| $\\rho$ | Channel Dropout (dim = 64) | Channel Dropout (dim = 256) | Element Dropout (dim = 64) | Element Dropout (dim = 256) |\n| :----: | :----: | :----: | :----: | :----: |\n| 0.1 | 72.5 | 71.9 | 72.3 | 70.8 |\n| 0.3 | 72.8 | 74.7 | 72.8 | 72.3 |\n| 0.5 | 70.1 | 75.1 | 73.4 | 74.0 |\n| 0.7 | 65.7 | 72.1 | 72.1 | 74.2 |\n| 0.9 | 20.5 | 54.7 | 68.2 | 73.8 |\n| ACD (Ours) | 73.5 | 75.2 | 73.5 | 75.2 |\n", " We sincerely thank the reviewer for recognizing the efficiency and effectiveness of our ACD as well as the extensive analysis of MPN. In the following, we respond carefully to the reviewer’s questions and hope our answers could address your concerns.\n\n[W1&Q1]: more (theoretical) discussion of equation (5)\n\n[A]: Thanks for the comments. 
Please refer to [Common Q1] above.\n\n\n[W2&Q2]: clarify contribution and even rewrite the paper to make the core contribution more obvious\n\n[A]: Thanks for the comments. We would like to clarify that our core contributions consist of two parts: (1) we make the first attempt to understand the effect of post-normalization on deep GCP from the perspective of model training (i.e., Sec. 2) and (2) we propose an efficient and effective DropCov method for normalizing GCP (i.e., Sec. 3). Meanwhile, these two contributions are closely related. Specifically, for the first contribution, Sec. 2.1 takes matrix power normalization (MPN) as an example, and concludes that effective post-normalization can make a good trade-off between representation decorrelation and information preservation for GCP, which are crucial to alleviate over-fitting and increase the representation ability of deep GCP networks, respectively. Then, Sec. 2.2 aims to further verify the finding in Sec. 2.1 by introducing APN and extending the finding on MPN to other post-normalization approaches (e.g., LogM and EwN). Therefore, APN and the analysis of LogM in Sec. 2.2 provide further support for the conclusion in Sec. 2.1, while the combination of Sec. 2.1 and Sec. 2.2 provides a full view of the effect of existing post-normalization methods. For the second contribution, our DropCov develops an efficient and effective ACD for normalizing deep GCP, which is strongly motivated by the finding in Sec. 2 (the first contribution), i.e., how to achieve a good trade-off between representation decorrelation and information preservation more efficiently. In summary, Sec. 2 and Sec. 3 respectively describe our two core contributions, where the finding in Sec. 2 encourages us to propose the method in Sec. 3.\n\nAs suggested by the reviewer, we will further compress the space of Sec. 2.2 and give more discussion of our ACD in Sec. 3 to highlight the contribution of our DropCov in the revision. \n\n[W3&Q3]: comparison to other training regularization techniques that prevent overfitting\n\n[A]: Thanks for the suggestion. The regularization perspective for post-normalization of GCP is interesting, and both existing post-normalization approaches and our ACD can be regarded as performing some regularization strategies on features or covariances. Particularly, GCP with structure-wise post-normalization can be rewritten as $\\mathbf{Z}=f(\\mathbf{X})f(\\mathbf{X})^{T}, \\ \\text{s.t.} \\ \\min_{f(\\mathbf{X})}\\|f(\\mathbf{X})f(\\mathbf{X})^{T}-P(\\mathbf{X}^{T}\\mathbf{X})\\|$, where $P(\\mathbf{X}^{T}\\mathbf{X})$ are $(\\mathbf{X}^{T}\\mathbf{X})^{\\alpha}$ and $\\log(\\mathbf{X}^{T}\\mathbf{X})$ for MPN and LogM, respectively. For our ACD, $f(\\mathbf{X})=\\delta_{\\rho}(\\mathbf{X})$. But note that our work for the first time shows how existing post-normalization approaches work for optimizing GCP networks. As suggested by the reviewer, we compare with several regularization techniques (i.e., Maxout [r1], DropConnect [r2], Decov [r3] and maxdropout [r4]) following the settings in Tab. 4. As shown in the table below, our ACD clearly outperforms the other methods. Although these methods can prevent overfitting, they are not good at balancing representation decorrelation and information preservation. The above results further verify the effectiveness of our ACD. 
In the revision, we will compare with more regularization techniques, including more variants of dropout (e.g., DropBlock [NeurIPS18]) and other weight regularization strategies.\n\n| Method | Maxout | Dropconnect | Decov | Maxdropout | ACD (Ours) |\n| :----: | :----: | :----: | :-----:| :----: | :----: |\n| Top-1 Acc(%) d=64 | 72.11 | 70.59 | 72.42 | 71.95 | 73.50 |\n| Top-5 Acc(%) d=64 | 90.56 | 89.34 | 90.69 | 90.11 | 91.36 |\n| Top-1 Acc(%) d=256| 73.67 | 72.46 | 74.01 | 70.11 | 75.20 |\n| Top-5 Acc(%) d=256| 91.37 | 90.23 | 91.55 | 88.97 | 92.13 |\n\n[r1] Maxout networks. In ICLR, 2013.\n\n[r2] Regularization of neural networks using dropconnect. In ICML, 2013.\n\n[r3] Reducing Overfitting in Deep Networks by Decorrelating Representations. In ICLR, 2016.\n\n[r4] Maxdropout: Deep neural network regularization based on maximum output values. In ICPR, 2020.\n", " We sincerely appreciate the reviewer for positive reviews on efficient method, soild experiments, good theory analysis and support to our paper. In the following, we respond carefully to the reviewer’s question and hope our answer could address your concerns.\n\n[Q1]: Is the GCP the bilinear pooling? I think it could be better to explain these two academic terms and illustrate their relationships in the paper.\n\n[A]: Thanks for the comment. GCP and bilinear pooling share very similar mathematical formulas, but there exist some differences between them. Specifically, GCP focuses on computing covariance of inputs $\\mathbf{X}$ with mean of $\\boldsymbol{\\mu}$ (i.e., $(\\mathbf{X}-\\boldsymbol{\\mu})^{T}(\\mathbf{X}-\\boldsymbol{\\mu})$), while bilinear pooling computes the outer product of inputs $\\mathbf{X}$ and $\\mathbf{Y}$ (i.e., $\\mathbf{X}^{T}\\mathbf{Y}$). When $\\mathbf{X}$ and $\\mathbf{Y}$ are shared (a most widely used case for bilinear pooling) and the inputs are zero-mean, bilinear pooling captures the same information with one of GCP. Besides, we note that element-wise signed sqrt-root followed by a $\\ell_{2}$ normalization is the default post-normalization for bilinear pooling ($\\mathbf{X}^{T}\\mathbf{X}$) in the original paper [29]. In this work, we focus on effect of different post-normalization approaches for GCP, and compared them in Table 4. In the revision, we will add above discussions to illustrate relationships between GCP and bilinear pooling.", " The main contribution of this paper is to propose Adaptive Channel Dropout (ACD), which performs channel dropout before the covariance computation to obtain good trade-off between representation decorrelation and information preservation. ACD is more efficient and more effective than existing post-normalization techniques. Since the core of ACD is dropout, it is only used during training, and is not needed in inference. There are other minor contributions, such as Adaptive Power Normalization (APN), which is a slight improvement over matrix power normalization (MPN) [34,42]. The paper also performs more analysis of MPN. Strengths\n - ACD is more effective and efficient than existing post-normalization techniques.\n - more extensive analysis of alpha for MPN\n\nWeaknesses\n - One of the main contribution is equation (5), but the intuition/explanation is not very clear. Some more theoretical discussion would also be helpful.\n - The current presentation makes it hard to understand what are the main contributions. There are many things going on: analysis of MPN, LogM. APN. ACD... 
My take is ACD is the main contribution, but would be helpful to make it clear in the paper\n - The main advantages of ACD seems to come from the fact that it is a dropout-based technique. The fact that its faster and is not required in inference are because it is based on dropout, which is quite different from post-norm/pre-norm. It would be helpful to compare to more regularization techniques in training. - more (theoretical) discussion of equation (5)\n - clarify contribution and even rewrite the paper to make the core contribution more obvious\n - comparison to other training regularization techniques that prevent overfitting Checklist points to the conclusion, but the conclusion does not really address the limitations and potential negative societal impact of the work.", " Motivated by the facts of \n(1) the post-normalization of GCP lacks theoretical understanding and \n(2) the high computational complexity of GCP normalization, \nthe paper proposes:\n- a theoretical analysis of matrix power normalization in the GCP networks and extend it to an adaptive normalization method\n- an adaptive channel dropout to reduce the computational complexity.\n\nThe proposed method has been verified on ImageNet and iNaturalist datasets using various scale architectures. Strengths\n- The proposed method is simple and efficient for classification tasks. \n- Experiments are solid on multiple datasets, especially have experimented with different scale architectures.\n- Good theory analysis with ablation studies. - Is the GCP the bilinear pooling? I think it could be better to explain these two academic terms and illustrate their relationships in the paper. None.", " This paper focuses on the post-normalization of the pooling strategy leveraging second-order statistics, i.e., global covariance pooling (GCP). The key contributions are twofold: 1) the paper shows that post-normalization in matrix power normalization [27] tries to balance representation decorrelation and information preservation through empirical analysis; 2) the paper proposes a linear complexity pre-normalization DropCov of GCP based on an adaptive channel dropout. \n\nThe proposed approach has demonstrated its effectiveness on several image classification benchmarks including ImageNet, ImageNet-C, ImageNet-A, stylized ImageNet, and long-tailed classification with different backbones (CNN, transformer). \n **Strength**: \n* [S1]: I think the main strength of this work is the experiments. The paper provides diverse experiments on different datasets with different backbones to validate the proposed DropCov. \n\n* [S2]: The paper is presented very well and reads very well. I read quite carefully but did not find any typos. The paragraphs and sections are also self-contained and organized well.\n\n* [S3]: The analysis of MPN (Sec.2.1) is intuitive, and the empirical results are interesting.\n\n**Weakness**:\n* [W1]: I think the main drawback of this work is the significance. I can acknowledge the improvement is clear, the idea is simple. But as GCP is computationally more expensive than 1st order pooling, not much attention is paid to the field.\n\n* [W2]: I think the analysis on MPN is a little shallow. the experiments are on a single dataset with a network. It would be interesting to see the effect of power on different sizes of datasets with different capacities of models. \n \n* [W3]: For the adaptive power normalization, the improvement with the proposed formulation is quite marginal. 
Moreover, I am wondering whether extra visualization would provide some insight. For example, it might be possible to visualize samples with important/small alpha to verify corollary 1 further.\n\n* [W4]: Equation 5 is complicated and not intuitive. For example, the cosine similarity between the feature importance and feature correlation is weird. Also, this similarity might be negative and cause the probability larger than 1? Equation 6 might be confused as well. The dropout should be only applied once on $X$ (not separately for $X$ and $X^T$)?\n\n* [W5]: It would be more convincing to conduct different dropout ratios (rather than a fixed ratio of 0.5) to validate the effect of adaptive dropout. \n\n* [W6]: It is not apparent to understand the intuition behind the experiments of “element” and “channel” dropout. For example, in lines 275-276, the paper emphasizes applying one type of dropouts is equivalent to considering only feature correlation or feature importance. It would be better to provide a more detailed explanation on this part. \n\n\n\n Please refer to the weaknesses.\n No potential negative societal impact.", " This paper focuses on studying matrix power normalization in global covariance pooling operation. The authors provide a detailed analysis in the effect of different power parameters on matrix power normalization as well as other post normalizations, in terms of feature decorrelation and information preservation, empirically showing the necessary of using a good tradeoff between the two properties. To reduce the computation complexity of post normalization, the authors introduce Dropconv which performs channel dropout to achieve feature decorrelation. Experiments show the ability of the proposed strategy on performance improvements on both ResNet and ViT backbones. Strengths:\n1. The authors provide a detailed analysis in the effect of the power parameter of matrix power normalization on feature decorrelation and information preservation, and consequential impact on model performance empirically, which is valuable for related research area. \n\n2. The paper is generally well written and easy to follow, although some statements need to be explained and clarified.\n\n3. Extensive experiments on representative traditional and modern backbones are conducted to validate the method. \n\nWeaknesses:\n1. Some descriptions are less rigorous and need to be clarified:\n\nThe first corollary (line 120) states that setting power parameter 0.5 is optimal according to Eq. 2 and Fig. 1, while it lacks either theoretical proof or comprehensive empirical studies. Is the behavior of power parameter fairly invariant to dataset or network backbone? \n\nFor the proposed Adaptive Power Normalization, the power parameter is optimized according to eigenvalues of feature covariance, while the features are updated during training. How does it realize in practice? There is a similar question for the dropout probability in Eq. 5. Could the authors provide their training behavior if they are variable during training? If the parameters are adaptive with model training from scratch, primitive parameters may be far from optimal values. On the other hand, if the variance of probability distribution is rather small, it may indicate less necessary for tuning it during training. \n\n2. The motivation of performing dropout on convolution channels is not new. The main difference is specializing the strategy with global covariance pooling. \n\n3. 
Equipping DropCov into baseline backbones (e.g., ResNet50) brings significant increase on parameter size and computational overhead. 1. For the experiments in Table 2, the counterpart of DropChannel seems analogous to ACD except that the probability in ACD is determined by features, while the performance gap is so significant and even higher than DropElement. Could the authors analyze the phenomenon? How does the DropChannel and DropElement perform when using different dropout probability beyond 0.5?\n\n2. It is unclear to me why computational complexity is linear for DropCov while quadratic to element-wise methods (in Table 1).\n The paper analyzes post-normalization for global covariance pooling unit and introduces channel dropout to improve it, which is not restricted to certain downstream cv tasks. It seems like no potential negative societal impact." ]
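The rebuttals above describe how ACD (Eq. (5)) is realized: channel-attention weights $\omega$ and per-channel correlations $\pi$ (row sums of $X^{T}X$) are combined into an adaptive dropout probability $\rho$, channel dropout with probability $\rho$ is applied to $X$ during training only, and GCP is then computed on the retained channels. The following PyTorch-style sketch illustrates that dataflow under stated assumptions; it is not the authors' implementation. In particular, `adaptive_rho` is a placeholder (the exact form of Eq. (5), involving $D/\log(d)$ and the inner product of $\omega$ and $\pi$, is not reproduced here), and the channel-attention module is reduced to a simple pooling-plus-sigmoid gate.

```python
# Rough sketch of the ACD pipeline described above (training-time only).
# Assumptions (not from the paper's released code): the attention gate is a plain
# global-average-pool + sigmoid, adaptive_rho only mimics Eq. (5) qualitatively,
# and the usual 1/(1 - rho) dropout rescaling is omitted for brevity.
import torch


def adaptive_rho(omega: torch.Tensor, pi: torch.Tensor) -> torch.Tensor:
    # Placeholder for Eq. (5): the paper combines D/log(d) with the inner product
    # of omega and pi; here we only clamp a cosine-based score to [0, 1], matching
    # the rebuttal's statement that rho is restricted via max(0, .) and min(1, .).
    cos = torch.nn.functional.cosine_similarity(omega, pi, dim=0)
    return torch.clamp(1.0 - cos, 0.0, 1.0)


def acd_gcp(x: torch.Tensor, training: bool = True) -> torch.Tensor:
    """x: (N, d) features (N spatial positions, d channels). Returns a d x d GCP."""
    n, d = x.shape
    # Feature importance omega: a minimal channel-attention surrogate.
    omega = torch.sigmoid(x.mean(dim=0))
    # Feature correlation pi: row sums of X^T X, as described in [Common Q1].
    cov = x.t() @ x
    pi = cov.sum(dim=1)
    if training:
        rho = adaptive_rho(omega, pi)
        keep = (torch.rand(d) > rho).float()  # drop each channel with probability rho
        x = x * keep                          # channel dropout applied before GCP
    # No normalization at inference: GCP is simply X^T X on the retained channels.
    return x.t() @ x


# Toy usage
feats = torch.randn(49, 64)         # e.g., 7x7 spatial positions, 64 channels
z = acd_gcp(feats, training=True)   # 64 x 64 covariance representation
```

Because the dropout branch is skipped at inference, this sketch is consistent with the responses' claim that DropCov adds no normalization cost at test time, while the quality of the learned trade-off hinges entirely on how $\rho$ is computed from $\omega$ and $\pi$.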
[ -1, -1, -1, -1, -1, -1, 5, 7, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "lwA2Pin-DEU", "2WqE6OvkFR_", "1CUngiee1q", "nips_2022_QLGuUwDx4S", "5yysqDOh1Uh", "mkK0-YOZNA", "nips_2022_QLGuUwDx4S", "nips_2022_QLGuUwDx4S", "nips_2022_QLGuUwDx4S", "nips_2022_QLGuUwDx4S" ]
nips_2022_XrECTbqRCfX
Approximate Secular Equations for the Cubic Regularization Subproblem
The cubic regularization method (CR) is a popular algorithm for unconstrained non-convex optimization. At each iteration, CR solves a cubically regularized quadratic problem, called the cubic regularization subproblem (CRS). One way to solve the CRS relies on solving the secular equation, whose computational bottleneck lies in the computation of all eigenvalues of the Hessian matrix. In this paper, we propose and analyze a novel CRS solver based on an approximate secular equation, which requires only some of the Hessian eigenvalues and is therefore much more efficient. Two approximate secular equations (ASEs) are developed. For both ASEs, we first study the existence and uniqueness of their roots and then establish an upper bound on the gap between the root and that of the standard secular equation. Such an upper bound can in turn be used to bound the distance from the approximate CRS solution based ASEs to the true CRS solution, thus offering a theoretical guarantee for our CRS solver. A desirable feature of our CRS solver is that it requires only matrix-vector multiplication but not matrix inversion, which makes it particularly suitable for high-dimensional applications of unconstrained non-convex optimization, such as low-rank recovery and deep learning. Numerical experiments with synthetic and real datasets are conducted to investigate the practical performance of the proposed CRS solver. Experiment results show that the proposed solver outperforms two state-of-the-art methods.
Accept
This paper proposes a new method for solving the cubic subproblem in the cubic regularized Newton method. The propose method is simple, but works very well in practice. The numerical experiments demonstrated that the ARC algorithm combined with the proposed new subproblem solver significantly outperforms ARC with different subproblem solvers. Moreover, the accuracy of the solution generated by the proposed method can be several orders better than the one generated by other methods. Error analysis of the proposed method is also provided. Overall, this is a very nice contribution to the cubic regularized Newton method, which has the potential to accelerate this important method.
train
[ "1JNQo5N6BYY", "r51mTVIXku4", "Lhb-7O13k4", "8QMmhdaz7nQ", "h3a4lyvmct", "hrbw0UewJ_k", "3Goka3XdFJD", "OLSYTN14gtd", "SZ16x6VCVSr", "T1FgLoYC--g9", "EzQU_swpPZ", "rdJkxO2d9W", "ujErMiQJ9S8", "yUfGcTHkHV", "4znGsKepf1j", "1OL6YAbibWS", "iuj_MH2mEYE", "Q_PtvDK2Wp" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer MvBV,\n\nThanks again for your review and comments. Do you have further comments? We hope our response answers all your concerns well.\n\nWe would like to state something further.\n## Contribution\nin this paper, **our main contribution** is the proposed novel ASEM in solving cubic subproblems, theoretical analysis of ASEM (and also on gaussian random matrices), numerical validation for the observation from theory and experimental test of the performances of the proposed method on some real applications. **It is not a purely theoretical paper...**\n\n\n## Confidence\n**We strongly believe the proposed method is useful in applications, providing another choice besides the Krylov-subspace method (which is the most popular in solving cubic subproblems).**", " Dear reviewer CTtP,\n\nThanks again for your review and comments. Do you have further comments? We hope our response answers all your concerns well.\n\nWe would like to state something further.\n## Contribution\nin this paper, **our main contribution** is the proposed novel ASEM in solving cubic subproblems, theoretical analysis of ASEM (and also on gaussian random matrices), numerical validation for the observation from theory and experimental test of the performances of the proposed method on some real applications. **It is not a purely theoretical paper...**\n\n\n## Confidence\n**We strongly believe the proposed method is useful in applications, providing another choice besides the Krylov-subspace method (which is the most popular in solving cubic subproblems).**\n\n", " Thanks again for your comments. We really understand your point.\n\nWe would like to state more about our contributions and confidence. **Our paper not purely theoretical...** The proposed algorithm is novel in solving cubic subproblems.\n\n## Theory\nNumerically, gradient descent (Carmon and Duchi, 2019 [1]) is very slow... The Krylov subspace method is very a good method in real applications. The ASEM sometimes is comparable to or only slightly better than the Krylov subspace method. However, in [2], they analyze the convergence of the Krylov method, whose convergence is linear and sublinear. **However, the exact error bound should also vanish when $m=n$ for the Krylov subspace method, which is not observed in the error bounds of [2].** In the analysis of ASEM, we have such observation from the error bound, which is more appropriate.\n\n## Contribution\nIn [2], their main contribution is the theoretical analysis of the Krylov subspace method. However, in this paper, **our main contribution** is the proposed novel ASEM in solving cubic subproblems, theoretical analysis of ASEM (and also on gaussian random matrices), numerical validation for the observation from theory and experimental test of the performances of the proposed method on some real applications. **It is not a purely theoretical paper...**\n\n\n## Confidence\nWe agree that [1] and [2] provide a very strong theoretical analysis for gradient descent and the Krylov subspace method. **We strongly believe the proposed method is useful in applications, providing another choice besides the Krylov-subspace method (which is the most popular in solving cubic subproblems).**", " Thank you again for the rebuttal. I have looked at the experimental section more carefully. Indeed it is a bit surprising to me that ASEM actually performs well even with one eigenvalue. I think this may perhaps be an interesting future direction that is worth looking into. 
\n\nI maintain my initial opinion that the theoretical contributions are still a bit weak. The fact that the final error of the approximate secular equation depends on the distribution of the eigenvalues are still quite obvious IMO. However, based on the experiments I have raised my score by 1. ", " Thanks for your response. We would like to clarify something further.\n\n**(1)** We didn't really agree the method is obvious and it is the first work in solving cubic subproblems by ASEM. Moreover, 'obvious' does not mean a lack of contribution. We suggest solving an approximate secular equation (ASEM) rather than the exact secular equation, where the former reduces much computation than the latter in large-scale problems. Our theoretical analysis is based on the approximate secular equation and shows that more eigen information has lower error. It is within our expectations. We found from the error bound that the error depends on the **'distribution' of eigenvalues, which is interesting**. **To make it more clear, we extend the result for random Gaussian matrices, where we know the eigen distribution (please see Line 124-128, Page 4, equation (8) and full proofs are in the appendix).** Those observations are also validated in the numerical expenriments. To the best of our knowledge, it is the first work that solves the cubic subproblem by ASEM. \n\n**(2)** You mentioned that your main question is how the proposed ASEM works in real applications, compared with state-of-the-art methods and you would consider increasing evaluation. We compared the ASEM with sota methods in real application problems. Please see Experiment 5, Table 1 in section 5. Are you satisfied with this part? Thanks.\n\nWe are looking forward to your further comments. ", " I thank the authors for their response. However I still believe that that my original point holds: the proposed method is very obvious and the theoretical results are weak. I would like to keep my current score. ", " Thanks so much! We revise the sentence accordingly. Please see the updated version. ", " Thank you for your hard work in adjusting the paper. After reading your changes, I only noticed that the red sentence in lines 77-78 is grammatically incorrect. The other material seems to be fine.\n\nHence, I am maintaining my positive impression of your contribution.", " Thanks so much for your comments. **We uploaded the latest version according to your comments.**\n\nIn [1], Carmon and Duchi used gradient descent to solve the subproblem (1) in the whole space $\\mathbb{R}^{n}$; In [2] and [3], Cartis et al. approximately solved (1) in the Krylov subspace, which is a (much) lower dimensional space. In this paper, our main idea is to solve (1) by solving the approximate secular equations. **Our motivations and ideas are totally different from the state-of-the-art methods.**\n\nWe have the following answers to your questions and comments.\n\n**Answers to your questions.**\n\n**(1)** We agree that we can intuitively imagine that larger $m$ implies more accuracy and there is a tradeoff between $m$ (computation costs) and accuracy. It is a natural and obvious phenomenon for most of approximation methods in mathematics, e.g., low-rank matrix decomposition, Krylov subspace method, and approximation capabilities of deep neural networks. In [10], they solve the secular equation and get accurate solution in each step of cubic regularization method. It performs well in low-dimensional problems. 
However, for high-dimensional problems, the eigendecomposition for large matrices is expensive. As is shown in Table 1 (the TQUARTIC problem), we solve the $5000$-dimensional problems in less than $16$ seconds, however, the full eigendecomposition for a dense $5000 \\times 5000$ matrix may require more than $16$ seconds (but only one iteration). In this sense, the proposed ASEM makes sense. \n\n**(2)** Besides some numerical exploration on simply ASEM in solving (1), we also test the adaptive cubic regularization method (ARC), which is the most popular variant of the cubic regularization method, in solving some optimization problems in real applications. We put all details and results in **Experiment 5 and Table 1 in section 5**. **CUTEst is a library that collects various optimization problems from real applications**. For easier implementation, we don't need to know the background of application problems and directly use the CUTEst library to test the performance of optimization algorithms. \n\n**(3)** In the **Experiment 5**, we compare the proposed ARC-ASEM with ARC-CP (Cauchy point method), ARC-GD (gradient descent in **Carmon and Duchi 2019 [1]**) and ARC-Krylov (the Krylov subspace method in **Cartis et al. 2011 [3]**) in solving some optimization problems that arise in real applications (collected in CUTEst library). As is shown in Tabe 1, the proposed ARC-ASEM is significantly better than ARC-CP and ARC-GD but slightly better than ARC-Krylov. It is within our expectations as the Krylov subspace method is one of the most popular methods in computational math. However, the ARC-Krylov is sensitive to the number of subspaces but ARC-ASEM is more robust with the number of eigenvalues in real applications.", " Thanks so much and we really appreciate your careful review and constructive comments. **We uploaded the latest version according to your comments.** \n\nYou mentioned our method is comparable with the Krylov subspace method. Yes, in the synthetic cases (Figure 4), our method is better but in real problems (CUTEst problems), ARC-ASEM is comparable (slightly better) with ARC-Krylov. It is within our expectations. We don't expect the ASEM significantly outperforms the Krylov subspace method as the Krylov subspace method is the most popular method in computational mathematics. But you can see in Table 1 that ASEM(1) and ASEM(10) are always good (is robust with $m$) but the Krylov subspace method is sensitive to the number of subspaces. In Table 1, the proposed ARC-ASEM(1) is comparable to or slightly better than ARC-Krylov. In [1], Carmon and Duchi used gradient descent to solve the subproblem (1) in the whole space $\\mathbb{R}^{n}$; In [2] and [3], Cartis et al. approximately solved (1) in the Krylov subspace, which is a (much) lower dimensional space. In this paper, our main idea is to solve (1) by solving the approximate secular equations. Our motivations and ideas are totally different from the state-of-the-art methods.\n\nWe have the following answers to your questions and comments. \n\n**(1)** Sorry, it is a typo... We updated it in the current version colored in red. \n\n**(2)-(8)** Thanks for your comments, please see the updated version.\n", " **Answers to your questions and comments**\n\n**(3)** Thanks for your suggestion, we move the information of the computing platform to the beginning of the section. 
\n\n**(4)** We tested the cubic regularization algorithm (ARC) with the proposed ASEM, Krylov subspace method, gradient descent and the Cauchy point method in solving real optimization problems (CUTEst) and all results are shown in **Table 1, Experiment 5**. **CUTEst is a library that collects various optimization problems from real applications**. For easier implementation, we don't need to know the background of application problems and directly use the CUTEst library to test the performance of optimization algorithms. Moreover, we also test the algorithm in solving high-dimensional real problems in CUTEst ($n=5000$). ", " Thank you for your careful review and constructive comments. **We uploaded the latest version according to your comments.** We have the following answers to your questions and comments. \n\n\nI would like to briefly introduce the cubic regularization method here. For a nonconstrained nonconvex optimization problem\n\\begin{equation*}\n \\min_{\\mathbf{x} \\in \\mathbb{R}^{n}} f(\\mathbf{x}),\n\\end{equation*}\nthe main idea of the cubic regularization method is as follows. We first select an initialized point $x_0$. With the current state $x_{t}$, the next state $x_{t+1}$ satisfies the following cubic regularized problem:\n\n\\begin{equation*}\n x_{t+1} \\in \\arg \\min_{\\mathbf{x}} \\left \\langle \\nabla f(x_{t}), x - x_{t} \\right \\rangle + \\frac{1}{2} \\left \\langle x - x_{t}, \\nabla^2 f(x_t) (x - x_{t}) \\right \\rangle + \\frac{\\rho_{t}}{3} ||x - x_{t}||_2^3.\n\\end{equation*}\n\nThe first two terms are Taylor expansion of $f(x)$ at $x_t$ up to the second order and the cubic regularization term constrains that $x_{t+1}$ cannot be far away from $x_t$ to guarantee the accuracy of the expansion. The ARC Algorithm adaptively updates $\\rho_t$ based on the current state $x_{t}$. Please see Algorithm 1 (the ARC Algorithm) in the supplement or in [3]. In this paper, we aim to solve the above subproblem (cubically regularized quadratic problem) for the cubic regularization method. \n\nIn [1], Carmon and Duchi used gradient descent to solve the subproblem (1) in the whole space $\\mathbb{R}^{n}$; In [2] and [3], Cartis et al. approximately solved (1) in the Krylov subspace, which is a (much) lower dimensional space. In this paper, our main idea is to solve (1) by solving the approximate secular equations. Our motivations and ideas are totally different from the state-of-the-art methods.\n\nWe agree the most **limitation** is that the trade-off between accuracy improvement and computational reduction is not clear. It is hard to mathematically and quantitatively show the trade-off since cubic regularization of Newton's method (or its most widely used variant ARC) requires iteratively solving the subproblem (1) multiple times. As is shown in Table 1 (the TQUARTIC problem), we solve the $5000$-dimensional problems in less than $16$ seconds, however, the full eigendecomposition for a $5000 \\times 5000$ matrix may require more than $16$ seconds (but only one iteration). In this sense, the proposed ASEM makes sense. However, for low-dimensional problems (for example, $10$-d optimization problems), the full eigendecomposition is fast, so it is unnecessary to adopt ASEM or Krylov subspace method for ARC. As is shown in most of the previous works (e.g., [1] for gradient descent and [2,3] for the Krylov subspace method), more accuracy implies more costs. The trade-off between them depends on the optimization problem itself. 
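For intuition only, the cost gap driving this trade-off can be sketched with a toy snippet that is not from the paper (matrix size, number of eigenpairs, and library choice are arbitrary here):

```python
import time
import numpy as np
from scipy.sparse.linalg import eigsh  # ARPACK: computes only a few eigenpairs

n, m = 4000, 10
A = np.random.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric test matrix

t0 = time.time()
np.linalg.eigh(A)                      # full spectrum, O(n^3), as needed by the exact secular equation
t_full = time.time() - t0

t0 = time.time()
eigsh(A, k=m, which="SA")              # only the m smallest eigenpairs, as ASEM requires
t_partial = time.time() - t0

print(f"full eigendecomposition: {t_full:.2f}s, m={m} smallest eigenpairs: {t_partial:.2f}s")
```

The timings are machine-dependent and purely illustrative, but the gap between the two calls grows quickly with the dimension, which is exactly the high-dimensional regime discussed here.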
The approximation method performs well, especially in high-dimensional problems since full eigendecomposition is really expensive while calculating $m$ eigenvalues ($m << n$) is much cheaper. \n\n\nYes, in synthetic cases, the proposed ASEM is better than Krylov (see Figure 4), but we don't expect the ASEM significantly outperforms the Krylov subspace method in solving real problems as the Krylov subspace method is the most popular method in computational mathematics. But you can see in Table 1 that ARC-ASEM(1) and ARC-ASEM(10) are always good (is robust with $m$) but the Krylov subspace method is sensitive to the number of subspaces. In Table 1, the proposed ARC-ASEM(1) is comparable to or slightly better than ARC-Krylov. In [3], the convergence of ARC is still guaranteed if we approximately solve the subproblem (1). Therefore, we may focus more on solving the subproblem. Our idea is novel and interesting.\n\n**Answers to your questions.**\n\n**(1)** If we accurately solve each subproblem of ARC (as in [10]), it could be very slow since full eigendecomposition is expensive for high-dimensional matrices. Similarly, if $m$ is large, that means the subproblem (1) is solved more accurately, then the total costs of ARC may be larger. As is shown in Table 1, ARC-ASEM(1) is better than ARC-ASEM(10) in terms of running time for convergence. Overly large $m$ for the proposed ASEM may cause 'waste' in total computation for convergence. Therefore, the selection of $m$ is important, which is related to the tradeoff between the computation and the accuracy. However, in real applications (shown in Table 1), we found that ASEM with $m=1$ is good enough. \n\n**(2)** The running time per iteration could be computed by (total running time)/(iterations), where both two are included in the table. For fairness, all hyperparameters of ARC are fixed (the same) for all experiments. Moreover, the most important parameter $\\rho$ is adaptively updated in ARC. The only parameter is the selection of $m$ and we can see in Table 1 that ASEM is not sensitive with $m$ but the performances of the Krylov subspace method differ much on the number of subspaces.", " **Line 90.** Yes, thanks for your pointing out the mistake here. It should be \"$\\mu \\geq \\lambda_{m}$\". \n\n**Line 109 and 113.**\nFirstly, the solution $\\sigma_1^{\\star}$ of the approximate secular equation and the solution $\\sigma^{\\star}$ of the secular equation must satisfies that $\\lambda_1 + \\sigma_1^{\\star} > 0$ and $\\lambda_1 + \\sigma^{\\star} > 0$. Then, by the Taylor expansion, we have\n\\begin{equation*}\n (\\lambda_i + \\sigma_1^{\\star})^{-1} = (\\lambda_i + \\sigma^{\\star})^{-1} - (\\lambda_i + \\sigma^{\\star})^{-2} \\cdot (\\sigma_1^{\\star} - \\sigma^{\\star}) + 2(\\lambda_i + \\sigma^{\\star})^{-3} \\cdot (\\sigma_1^{\\star} - \\sigma^{\\star})^2 + \\cdots.\n\\end{equation*}\nNote that $\\lambda_1 + \\sigma^{\\star} > 0$ is fixed for the given $\\mathbf{A}$, $\\mathbf{b}$ and $\\rho$ and is independent with the approximate solution $\\sigma_1^{*}$ from the proposed method. When $\\sigma_1^{\\star} - \\sigma^{\\star}$ is small enough, then dominant term in the above expansion is the first-order term. Therefore, the big $\\mathcal{O}(\\cdot)$ here is consistent with that in the mathematical literature.\n\n**Line 122-126.** We put detailed mathematical proof in the supplement due to the page limit. All results are within theoretical guarantees. 
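As a self-contained toy illustration (not the code used for the experiments in the paper; the problem sizes and the choice of $\mu$ below are arbitrary), the exact secular equation and its truncated ASEM counterpart can be compared numerically on a random symmetric Gaussian matrix:

```python
import numpy as np

def w(sigma, lam, c, rho):
    # secular function: w(sigma) = sum_i c_i^2 / (lam_i + sigma)^2 - sigma^2 / rho^2
    return np.sum(c**2 / (lam + sigma) ** 2) - sigma**2 / rho**2

def solve_secular(lam, c, rho, iters=200):
    # w is strictly decreasing for sigma > max(-lam_1, 0), so bisection finds the unique root
    lo = max(-lam.min(), 0.0) + 1e-10
    hi = max(1.0, 2 * lo)
    while w(hi, lam, c, rho) > 0:      # expand the bracket until a sign change is found
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if w(mid, lam, c, rho) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
n, m, rho = 500, 20, 1.0
A = rng.standard_normal((n, n)); A = (A + A.T) / np.sqrt(2 * n)  # symmetric Gaussian matrix
b = rng.standard_normal(n)
lam, V = np.linalg.eigh(A)             # full spectrum used only as ground truth; ASEM needs only lam[:m]
c = V.T @ b

sigma_exact = solve_secular(lam, c, rho)
mu = lam[m:].mean()                    # any mu >= lam_m is admissible; the mean is one simple choice
sigma_asem = solve_secular(np.concatenate([lam[:m], np.full(n - m, mu)]), c, rho)
print(abs(sigma_asem - sigma_exact) / sigma_exact)
```

Increasing $m$ in such a toy run reproduces the qualitative trend above: retaining more eigenvalues lowers the error, at the cost of a more expensive partial eigendecomposition.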
For the typical random Gaussian matrix, its eigenvalues follow the so-called semi-circle law with high probabilities. As we mentioned that the proposed ASEM depends on the distribution of eigenvalues of $\\mathbf{A}$. Then we derived the explicit error bound for the ASEM for such cases. \n\n**Is it possible to improve the bound?** It is possible but may be hard. It is derived by solving approximate secular equations. The exact error bound should depend on the distribution of eigenvalues (as is shown in the numerical experiments, Figure 1). I think it is tight enough since the information delivered by the error bound is validated in the experiments (please see Figures 1, 2 and 3 in the experimental part). Also, our error bounds vanish at $m=n$, which is consistent with the exact error.", " Thanks so much for your comments. **We uploaded the latest version according to your comments.** We have the following answers to your questions and comments. \nI would like to briefly introduce the cubic regularization method here. For a nonconstrained nonconvex optimization problem\n\\begin{equation*}\n \\min_{\\mathbf{x} \\in \\mathbb{R}^{n}} f(\\mathbf{x}),\n\\end{equation*}\nthe main idea of the cubic regularization method is as follows. We first select an initialized point $x_0$. With the current state $x_{t}$, the next state $x_{t+1}$ satisfies the following cubic regularized problem:\n\n\\begin{equation*}\n x_{t+1} \\in \\arg \\min_{\\mathbf{x}} \\left \\langle \\nabla f(x_{t}), x - x_{t} \\right \\rangle + \\frac{1}{2} \\left \\langle x - x_{t}, \\nabla^2 f(x_t) (x - x_{t}) \\right \\rangle + \\frac{\\rho_{t}}{3} ||x - x_{t}||_2^3.\n\\end{equation*}\n\nThe first two terms are Taylor expansion of $f(x)$ at $x_t$ up to the second order and the cubic regularization term constrains that $x_{t+1}$ cannot be far away from $x_t$ to guarantee the accuracy of the expansion. The ARC Algorithm adaptively updates $\\rho_t$ based on the current state $x_{t}$. Please see Algorithm 1 (the ARC Algorithm) in the supplement or in [3]. In this paper, we aim to solve the above subproblem (cubically regularized quadratic problem) for the cubic regularization method. \n\nIn [1], Carmon and Duchi used gradient descent to solve the subproblem (1) in the whole space $\\mathbb{R}^{n}$; In [2] and [3], Cartis et al. approximately solved (1) in the Krylov subspace, which is a (much) lower dimensional space. In this paper, our main idea is to solve (1) by solving the approximate secular equations. Our motivations and ideas are totally different from the state-of-the-art methods.\n\nOur theoretical bound is different from some common bounds (e.g., linear convergence and quasi-linear convergence) since we derived it from solving the secular equation. For a $n \\times n$ matrix, we can get an accurate solution (the error bound should be zero) if we set $m=n$. Moreover, we have an additional requirement that $m \\leq n$, where the error bound should vanish at $m=n$ (it cannot be achieved for linear convergence). Therefore, the theoretical error bound should be different from some conventional bounds (e.g., linear convergence). In this way, our theoretical bound makes sense. \n\n**Line 41**. Yes, it is derived from Proposition 2.1 and 2.2 in [1]. I would like to give detailed proof here and will include it in the supplementary material. The global solution $x^{\\star}$ of the cubic regularized problem (1) must satisfy (2) and (3) which are first-order and second-order conditions. 
We multiply both two sides of\n\\begin{equation*}\n (A + \\rho ||x^{\\star}|| \\mathbf{I})x^{\\star} + b = 0\n\\end{equation*}\nby $\\mathbf{v}_1$ (the eigenvector corresponds to the minimal eigenvalues of $\\mathbf{A}$), then we have \n\\begin{equation*}\n (\\lambda_1 + \\rho ||x^{\\star}|| ) \\cdot (\\mathbf{v}_1^{\\mathrm{T}}x^{\\star}) + \\mathbf{v}_1^{\\mathrm{T}} \\mathbf{b} = 0.\n\\end{equation*}\nIf $\\mathbf{v}_1^{\\mathrm{T}} \\mathbf{b} \\neq 0$, then $\\lambda_1 + \\rho ||x^{\\star}|| $ must be non-zero (thus strictly positive). Therefore, the matrix $\\mathbf{A} + \\rho ||x^{\\star}|| \\mathbf{I} \\succ \\mathbf{0}$ is positive definite and the solution $x^{\\star} = -(\\mathbf{A} + \\rho ||x^{\\star}|| \\mathbf{I})^{-1} \\mathbf{b}$. Moreover, if $\\lambda_1 + \\rho ||x^{\\star}|| $ is positive, the solution of problem (1) is unique (please see Theorem 3.1 in [3]). \n\n\n\n\n**Line 45.** Under the condition that $v_1^{T} b \\neq 0$, we know that the problem (1) has a unique solution, and more importantly, it is also a \\textbf{unique stable point}. There does not exist another point $\\bar{x} \\neq x^{\\star}$ such that $\\nabla f_{\\mathbf{A},\\mathbf{b},\\rho}(\\bar{x}) = 0$. Without the loss of generality, we assume that $||\\bar{x}||_2 > ||x^{\\star}||$ (if $||\\bar{x}||_2 = ||x^{\\star}||$, the only solution for the first-order condition is $x^{\\star} = -(\\mathbf{A} + \\rho ||x^{\\star}|| \\mathbf{I})^{-1} \\mathbf{b}$. Then, we have $x^{\\star} = -(\\mathbf{A} + \\rho ||x^{\\star}\\| \\mathbf{I})^{-1} \\mathbf{b}$ and $\\bar{x} = -(\\mathbf{A} + \\rho ||\\bar{x}|| \\mathbf{I})^{-1} \\mathbf{b}$. \n\nHowever, $||(A + \\rho ||\\bar{x}||I)^{-1} b|| \\leq ||(A + \\rho ||x^{\\star}|| I)^{-1} b||$\n, which is a contradiction. Therefore, in numerical experiments, we use the gradient $||\\nabla f_{\\mathbf{A},\\mathbf{b},\\rho}(\\mathbf{x}) = 0||_2$ to measure the optimality, where smaller gradient means to be closer to the global minima of the problem (1). ", " In the cubic regularization method, a key step is to solve the so called secular equation which may costs O(n^3) time.\nIn this paper, the authors consider two faster approximation to the secular equation which reduce the computing time to O(n^2 m).\nThe authors give some theoretical analysis of the proposed methods.\n The proposed method reduces the computing time of secular equation, which may be novel.\nThe paper is relatively well-written and clear.\nThis paper is mostly a theoretical paper.\nIn my opinion, however, the theoretical results are not significant enough.\nIn fact, from the authors' theoretical results (Proposition 3 for example), the proposed method has an irreducible error.\nFrom the authors' experiments, it may be the case that the obtained theoretical bound is too loose.\n line 41: Proposition 2. Is this proposition adapted from Claim 2.1 in [1]? Why A + rho ||x^*|| > 0? Can you provide a proof of Proposition 2?\nline 45: \"and hence the gradient norm serves as an optimality measure\" I do not understand this.\nline 49: \"lambda_1 + simga > 0\" Should it be >= 0?\nline 90: \"for any mu\". Should it be \"for any mu > lambda_m\"?\nline 109: What does the symbol big O mean in your context? From line 113, it seems tht ||tuilde x - x*|| has no absolute upper bound, for example, suppose |lmabda_i| and simga* are very small.\nline 122 -- 126: These discussions lack mathematical rigor.\n\nFrom the theoretical view, is it possible to improve the bound? 
Yes", " The paper studies the cubic regularization technique and proposes two methods for approximating the secular equation based on the first-order and second-order Taylar expansions. Theoretical discussions including uniqueness and existence of the solution together with error analysis are provided. Five numerical experiments are conducted with comparable performance with the state of the art. However, the trade-off between accuracy improvement and computational reduction for these truncation-based approaches is not clear. Overall, the paper looks interesting with a variety of applications but with limited novelty. Strengths: The paper proposes to use simple Taylor approximations to approximate the secular equation, which intends to simplify the computation. Implementation details are provided, together with various numerical comparisons with the state of the art. \n\nWeaknesses: The Taylor expansion-based truncation seems intuitive and standard, which will bring inaccuracy for the approximation. It is not fully clear how this type of approximation will affect the accuracy and reduce the computational cost. The general high-order approximation could be discussed as well. In addition, in the numerical results, say Table 1, ARC-Krylov seems to outperform the proposed method in terms of running time. A comparison of computational complexity in the big-O notation could be given with theoretical discussions. 1. In line 212, what does the \"computational waste\" mean here?\n2. In Table 1, it may not be fair to simply compare the overall running time. The running time per iteration could be compared as well. In addition, does the number of iterations depend on the selection of certain parameters in all methods being compared? If yes, please discuss which parameters are sensitive and provide guidelines for tuning if available. \n3. In Section 5, it would be better to describe the computing platform and computer configurations at the very beginning of the section rather than in an unnoticeable place in Experiment 5. Regularization techniques have been widely used in solving a lot of application problems, such as inverse problems and machine learning. But the paper does not show any such application experiment, which would limit the practical use and attraction. ", " This paper proposes an efficient and general scheme for finding approximate solutions to the cubic regularization subproblem (CRS) used in the classic cubic regualization method in nonconvex optimization. The main advantage of the scheme lies in the efficient computation of the unique root in a given truncated approximate secular equation (ABE) that scales as $O(mn^2)$ where $n$ is the dimension of the underlying problem and $m < n$ is a parameter that balances the accuracy of the scheme with the overall computational cost. Numerical experiments are given to demonstrate the behavior of the scheme on different parameter choices, different kinds of problem instances, and a subset of the well-known CUTEst optimization problem dataset. **Disclaimer**: My review is limited to the material presented in the 9-page body of the paper, and does not consider the materials in the supplement.\n\n*Strengths*\n\n1. The numerical experiments are significant in their scope and quality. Specifically, they test the behavior of the proposed scheme (ASEM) with respect to its key parameter $\\mu$ and the distribution of eigenvalues in the CR subproblem. 
I also appreciate the inclusion of other well-known algorithms in the benchmarks, such as gradient descent, the Cauchy-point method, and the Krylov subspace method.\n\n2. The error bounds in Theorem 1 and 2 are highly appreciated, as they encapsulate the expected behavior of the scheme and do abuse asymptotic notation to hide any universal constants.\n\n3. The complexity improvement from $O(n^3)$ to $O(mn^2)$ is impactful both from a theoretical and practical point-of-view.\n\n4. The writing of the paper is both clear and concise. Moreover, the remarks following some of the more important results, e.g., Proposition 3, are both welcome and informative.\n\n*(Minor) Weaknesses*\n\n1. There are few places that could better with additional clarifying statements (see the *Questions* section below).\n\n2. There are few minor typos (see the *Questions* section below). 1. Line 36: Do you mean $V\\Lambda V^T$?\n\n2. Proposition 2: Make this slightly more self-contained, by re-iterating what $v_1$ and $x^*$ are.\n\n3. End of Section 1: A topic that might arouse more interest in the paper early on is a discussion on the choice of $m$ (which is discussed later, starting on line 211). Hence, it would be helpful to add 1-2 sentences at the end of Section 1 to say that this is a topic that will be discussed later in the paper.\n\n4. The choice of $\\mu$ in Theorem 2 does not seem to be immediately implementable, since it relies on $\\lambda_{m+1},\\ldots,\\lambda_{n}$. However, equation (13) later on gives a tractable form for $\\mu$. Perhaps a remark in (or after) the Theorem should be made to direct the reader to (13).\n\n5. Line 197: \"...if with imperfect initialization.\" Missing word?\n\n6. Line 183: Return <- Returning\n\n7. Line 289: \"The rest of the parameter...\" <- \"The remaining parameter...\"\n\n8. The words \"Krylov\" and \"Krylob\" are used interchangeably throughout. I would suggest picking only one of these spellings. The authors have sufficiently addressed all limitations (assumptions) in this paper. ", " The well-known cubic regularization of Newton's method proposed by Nesterov (2006) requires solving a \"cubic subproblem\" at each iteration:\n$\n\\min_{\\mathbf{x} \\in \\mathbb{R}^{n}} =\\mathbf{b}^{\\mathrm{T}} \\mathbf{x}+\\frac{1}{2} \\mathbf{x}^{\\mathrm{T}} \\mathbf{A} \\mathbf{x}+\\frac{\\rho}{3}\\|\\mathbf{x}\\|^{3}.\n$ \nThe cubic subproblem is equivalent to solving a \"secular equation\", which is a nonlinear equation that depends explicitly on the eigenvalues of $A \\in \\mathbb{R}^{n\\times n}$. Solvers for this secular equation run in $O(n^3)$. \n\nIn this work the authors propose two \"approximate\" secular equations. Essentially the approximate secular equations are obtained by setting a parameter $m$ and replace all eigenvalues larger than $\\lambda_m(A)$ in the secular equation with a constant $\\mu\\geq \\lambda_m(A)$. The advantage is that now we only need to compute the first $m$ eigenvalues of $A$, reducing the computational cost to $O(mn^2)$. The authors provide error analysis on the difference between the exact solution of the secular equation and the approximate solution. Furthermore, they provide numerical experiments to gauge the error of the approximate secular equation in a number of different scenarios. It is observed that the error depends on the distribution of the eigenvalues of $A$. ### Strengths\nThe ability to solve cubic subproblems efficiently is very important for the success of cubic regularization. 
The idea of solving an approximate cubic subproblem instead of an exact one can be useful in specific cases where the large eigenvalues of $A$ is concentrated in a small interval. The error bounds in this paper also become tighter in this regime. \n\n### Weaknesses\n- The main contribution of this paper is proposing two approximate secular equations and analyzing their errors. However, both the proposed equations and the error analysis are quite obvious and unsurprising. It basically boils down to the following idea: in the equation\n$\nw(\\sigma)=\\sum_{i=1}^{n} \\frac{c_{i}^{2}}{\\left(\\lambda_{i}+\\sigma\\right)^{2}}-\\frac{\\sigma^{2}}{\\rho^{2}} = 0,\n$\nwe can pick an index $m$ and replace all eigenvalues greater or equal to $\\lambda_m(A)$ with a constant $\\mu$, resulting in an approximate equation. The larger $m$ is, the smaller the approximation error. When $m=n$, the method is exact. The error analysis is also based on this idea. The overall theoretical contribution is weak because the tradeoff between $m$ and the accuracy is quite obvious. It is also clear that the resulting error will depend on $m$, $\\mu$ and the distribution of the eigenvalues. Using a small $m$, of course, will increase the error and reduce the running time, which is the gist of this paper. \n- The motivation for this work is that the approximate secular equations can be used to solve the cubic subproblem, resulting in more efficient iterates for cubic regularization of Newton's method. However the experimental section does not contain such an experiment where an actual optimization problem is solved with cubic regularization and the running time is compared with other methods for solving the cubic subproblem, either exactly or approximately. \n- In the end, there is neither a theoretical proof nor a convincing experimental section that demonstrate the advantage of the proposed method when it is actually used in cubic regularization. \n The paper is clear and easy to understand. My only question is how does the approximate solver work in practice when used in cubic regularization? If the authors can implement their method and compared to state of art algorithms such as Carmon and Duchi 2018, or Cartis et al. 2011 and demonstrate a clear advantage, I would consider increasing my overall evaluation. Yes. There is no direct negative societal impact. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 4, 4 ]
[ "4znGsKepf1j", "1OL6YAbibWS", "8QMmhdaz7nQ", "h3a4lyvmct", "hrbw0UewJ_k", "SZ16x6VCVSr", "OLSYTN14gtd", "T1FgLoYC--g9", "Q_PtvDK2Wp", "iuj_MH2mEYE", "1OL6YAbibWS", "1OL6YAbibWS", "4znGsKepf1j", "4znGsKepf1j", "nips_2022_XrECTbqRCfX", "nips_2022_XrECTbqRCfX", "nips_2022_XrECTbqRCfX", "nips_2022_XrECTbqRCfX" ]
nips_2022_QqWqFLbllZh
Spatial Pruned Sparse Convolution for Efficient 3D Object Detection
3D scenes are dominated by a large number of background points, which are redundant for the detection task that mainly needs to focus on foreground objects. In this paper, we analyze the major components of existing sparse 3D CNNs and find that 3D CNNs ignore the redundancy of the data and further amplify it in the down-sampling process, which brings a huge amount of extra and unnecessary computational overhead. Inspired by this, we propose a new convolution operator named spatial pruned sparse convolution (SPS-Conv), which includes two variants, spatial pruned submanifold sparse convolution (SPSS-Conv) and spatial pruned regular sparse convolution (SPRS-Conv), both of which are based on the idea of dynamically determining crucial areas for performing computations to reduce redundancy. We empirically find that the magnitude of features can serve as an important cue for determining crucial areas, which avoids the heavy computation of learning-based methods. The proposed modules can easily be incorporated into existing sparse 3D CNNs without extra architectural modifications. Extensive experiments on the KITTI and nuScenes datasets demonstrate that our method can achieve more than a 50% reduction in GFLOPs without compromising performance.
Accept
The paper shows that it is possible to obtain good savings in terms of both FLOPs and latency using sparse convolutions for 3D object detection by leveraging the magnitude of features. After a strong rebuttal, all 4 reviewers vote for acceptance of the paper with high confidence. I suggest that the authors incorporate the comments from reviewers and some of the results from the rebuttal to make the paper more immediately convincing upon a first reading.
train
[ "VLS98bvtQv", "VJAg41xViH", "uwthCSy6CCx", "2ApmzBcmzlO", "jAGnJfKpQS4", "OkjJlNs2m84", "2Sv-sPclQ2C", "KcsaNv8iDD23", "Lnr0U-W7w2R", "HwGY9dx-vci", "7mlXBxC0IEdb", "RdqYOYgp6qR", "LQZAKxxSqgG", "WFg8LnOVn-U", "t151OooZatl", "qRj2gV6q4L-", "o2yFdphOjcS", "dbnbi7oRQq", "aaJwubmzNIB" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer TzZm:\n\nWe are sincerely grateful for your positive feedback of our work. Thanks for your remind that the visualization should still be further explored. We are preparing and will add these to our paper.\n\nBest regards,\n\nPaper 1506 authors", " Thanks for the authors for providing the visualization and add the analysis for the relationship between magnitude and visualization to address my concern. While the methodology makes sense with those added analysis, I feel the visualization could still be further explored to understand better how feature magnitude varies in the training process before pruning. I will suggest to use heat map visualization for the feature magnitude. Despite the flaws in the representation, I do think this paper provided valuable and practical contribution to the field. I will changed my overall rating of this paper to be: 5: Borderline accept.", " We thank the reviewer again for the detailed discussions and the kind support of this work. Your constructive feedback and criticisms will help us greatly towards improving this work.\n", " Thanks for your reply which resolves most of my concerns! I will keep my original rating.", " Dear reviewer TzZm:\n\nThanks for your great efforts and insightful suggestions. We have provided responses to your concerns. If you still have other questions, we are happy to further discuss with you. Thanks for your time again.\n\nBest regards,\n\nPaper 1506 authors ", " Dear reviewers,\n\nThanks a lot for your time and efforts in reviewing our paper. We have tried our best to address all mentioned concerns. We would appreciate it if you can take a look at our response. Your feedback is valuable to us, and we are willing to have further discussions with you. \n\nBest regards, \n\nPaper 1506 Authors\n\n", " We sincerely thank the reviewer for the constructive feedback and support.", " Thank you for your reply which resolves most of my concerns, I will keep my score.\n\nNote: For Q3, I am talking about the implicit GEMM implementation.", " **Q1:The latency improvement of the SPS-Conv?**\n\n**A1**: Please refer to the Q1 in the Common question section. In the table, we show the latency reduction of SPS-Conv. Among them, SPS-Conv and SPRS can get a speed increase of more than 10% when used alone, and they can deliver a higher speed increase (around 20%) when used together. Note that our current implementations are based on *Pytorch functions* and *spconv 2.x* without implementation engineering. We believe that customized CUDA implementation will further help reduce the running time as the running time is closely also related to implementation besides the model complexity.\n\nWe will continue to optimize this code to make it more efficient. We are sorry that we cannot provide this because of the short rebuttal period. We commit to open source it as a variant of spconv to benefit the community.\n\n**Q2: The overhead introduced to caculate the important masks ?**\n\n**A2:** Thanks for the suggestion. Following the suggestion, we count the time it takes to generate the mask in the convolution. 
The results are as follows:\n\n| Method / speed(ms) | KITTI (VoxelNet) | KITTI (mask time) | nuScenes (VoxelResNet) | nuScenes (mask time) |\n| ------------------ | ---------------- | ----------------- | ---------------------- | -------------------- |\n| spss topk | 36 ms | 1.7 ms | 44 ms | 4.6 ms |\n| sprs topk | 33 ms | 0.4 ms | 44 ms | 0.9 ms |\n\n**Impact on latency:** It should be mentioned that the generation of masks is based on torch.argsort(). Since PyTorch optimizations are not ideal, this part does generate additional time consumption. And this effect is more pronounced as the number of points increases. At present, the time consumption generated by the mask is still within an acceptable range as shown in the table. We will use the divide and conquer algorithm to write a customized CUDA module to accelerate topk operation, which would further improve the latency. We are sorry for not being able to do this constrained by the short time of rebuttal. *Note that our model still obtains around a 20% overall reduction in latency even with this naive implementation without sacrificing accuracy.* \n\n**Q3:Whether such pruning will lead to load imblance between different CUDA threads and limit the speed up ?**\n\n**A3**: Thanks for your valuable comments. The calculation of Spconv is mainly divided into two parts (1) generating the index pair and (2) general matrix multiplication (GEMM). We analyze these two aspects separately:\n\nFirst of all, for the generation of index pairs, we implement it by constraining the output position based on the index mask. Specifically, we only need to pass the index mask into the kernel function as a parameter and use a rule to determine whether the original index pair satisfies the constraints of the index mask. We believe that this part does not account for a high proportion of the overall network inference time, as shown in the table in A2, the impact on CUDA threads thus can be ignored.\n\nSecondly, For GEMM, the implementation of spconv is calculated along the spatial dimensions of the kernel, eg. kernel size: 3x3x3. Different spatial locations are calculated at different iterations and will not affect each other. You might have the impression that there exists a large difference in terms of the number of points at different spatial locations, causing an imbalance in computation. However, we argue that this again will not lead to load imbalance between CUDA threads because different spatial positions are mapped to independent GEMMs and each GEMM is performed in a dense manner.", " **Q4: Performance, Compression ratio, and speedup on Waymo dataset?**\n\n**A4:** In this part, we evaluate our model on the Waymo dataset. Due to storage reasons, all experiments kept the batch size as 1 and tested on a single A100 GPU.\n\nWe report the performance (table in commen response Q2), speed, and FLOPs on the Waymo dataset in the following Table. Our method can effectively reduce GFLOPs (around 63%). Although, FLOPs cannot all translate into speed improvements. But we still have a nearly 20% speedup in latency reduction due to implementation optimization and hardware issues discussed in (Common respose Q1). 
We believe we still have a room for optimization to further improve the efficiency by implementing customized CUDA fuctions.\n\n| Method / speed(ms) | Waymo (VoxelResNet) | speed up | GFLOPs |\n| ------------------ | ------------------- | -------- | ------ |\n| baseline | 37 ms | None | 76.7 |\n| spss | 32 ms | 13.5% | 43.5 |\n| sprs | 33 ms | 11% | 55.2 |\n| sprs+spss | 30 ms | 19% | 28.8 |\n\n**Q5: The details of spconv in section 3 are somewhat redundant ?**\n\n**A5:** Thanks for the suggestion. We will put some details of spconv in the appendix according to your suggestion, and add Waymo and latency reduction results to the main text.", " **Q1: Large feature magnitude vs high feature absolute value in a channel**\n\n**A1:** Thanks for this insightful suggestion. We are very sorry that this question was overlooked when designing the experiment. According to the suggestion, we have the following views on this issue:\n\n**(1)** **Large overlap in selected points:** Here, we use channel-wise absolute mean (feature l_1 norm) and absolute max to select important positions and calculate their intersection portions. The experimental results show that the candidate sets selected by the two methods have an intersection rate of more than 87%. Therefore, we have reason to believe that there is a certain consistency of results between the two criteria because the samples whose average feature norm is small but feature norm on individual channels is large are minorities.\n\n**(2) Performance analysis:** Since our approach of using absolute mean to obtain magnitude has already achieved similar performance as the baseline, we think that even adding those outliers (those features with very large values on some specific channels) will not increase the performance any further.\n\n**Q2: Ablation results in table 6.**\n\n**A2:** Thanks for the suggestion. After rethinking table 6, we found that it may indeed bring some confusion, so we will further prove the effectiveness of our method from the following two aspects.\n\n1. Why is the performance drop limited in easy and hard cases?\n2. What is the effect of the inverse experiment on SPRS-Conv?\n\n**AS1: Performace drop analysis on SPSS-Conv**\n\n**(1) Background of the detection architecture:** The entire 3D detector contains two feature extraction parts: sparse CNN and BEVbackone. For sparse CNN, it takes a 3D point cloud as input and extracts spatial geometric information. In turn, the height information of these features will be compressed into the channel dimension, and the point cloud will be converted into a regular 2D feature. BEVbackbone, as a feature encoder composed of only 2D convolutions, can further obtain high-quality feature representation. Since the two feature extractors are coupled together, they are both indispensable for bounding box regression.\n\n**(2) Explanation of Table 6 on the KITTI dataset**: For convolution in unimportant positions in table 6, the easy and hard cases do not suffer from performance degradations. We think the most important cause for this phenomenon is the KITTI dataset which is a small-scale dataset and potentially has an in-balanced or skewed distribution of objects in different ranges. This makes the sparse CNN component more crucial to the moderate objects while less crucial for easy and hard cases. In order to verify this conjecture, we removed all the submanifold layers in the sparse CNN, leaving only the downsample layer to align the shape of the feature, the result is shown in the table below. 
\n\n| Method (KITTI) | Easy | Moderate | Hard |\n| --------------------- | ----- | -------- | ----- |\n| SPSS-Conv | 89.22 | 84.36 | 78.83 |\n| SPSS-Conv inverse | 89.15 | 79.13 | 78.47 |\n| basline no subm layer | 88.84 | 78.88 | 78.25 |\n\nSurprisingly, compared to the baseline, even if 3D convolution is not used for feature extraction, the performance of easy and hard will not fluctuate significantly. Therefore, we have reason to suspect that sparse CNN itself is redundant for these easy and hard objects on the KITTI dataset, which is caused by the characteristics of the data itself. \n\n**(3) Experiments on the nuScenes dataset:** For further evaluation, we replicate this ablation study on a much larger dataset — nuScenes to verify our claim. The results are shown in the following table:\n\n| Method | mAP | NDS | car | truck | bus | trailer | construction_vehicle | pedestrian | motorcycle | bicycle | traffic_cone | barrier |\n| ----------------- | ----- | ----- | ---- | ----- | ---- | ------- | -------------------- | ---------- | ---------- | ------- | ------------ | ------- |\n| SPSS-Conv | 58.48 | 66.11 | 85.0 | 58.2 | 69.5 | 35.7 | 15.5 | 85.3 | 58.8 | 40.9 | 70.0 | 68.1 |\n| SPSS-Conv inverse | 55.84 | 64.72 | 84.3 | 56.7 | 68.2 | 33.3 | 14.4 | 83.5 | 54.4 | 34.2 | 63.3 | 66.1 |\n\nBased on the results in the above table, it can be seen that when we reverse the important position, there is a considerable performance drop in all categories. This phenomenon is more obvious in small objects, especially for the traffic cone and bicycle categories (even reaching about a 7% performance drop). The results differences are caused by the difference in terms of size and point numbers. Large object categories (such as vehicles) with more points are less sensitive to the sampling method but still have a notable performance drop. We will add the ablation results in the paper.", " \n**AS2: Performance ablation on SPRS-Conv**\n\n**(1) Experiments on SPRS-Conv on KITTI and Nuscens:** We performed inversion ablation experiments on SPRS-Conv to further validate our proposed hypothesis. Similar to table 6 in main paper, we exchanged the important and unimportant area which means only the positions with low magnitude would be dilated. The experiments are conducted on both KITTI and nuScenes dataset. We report the results as below: \n\n| Method (KITTI) | Easy | Moderate | Hard |\n| ----------------- | ----- | -------- | ----- |\n| SPRS-Conv | 89.22 | 84.36 | 78.83 |\n| SPRS-Conv inverse | 70.36 | 49.81 | 44.06 |\n\n| Method(nuScenes) | mAP | NDS | car | truck | bus | trailer | construction_vehicle | pedestrian | motorcycle | bicycle | traffic_cone | barrier |\n| ----------------- | ----- | ----- | ---- | ----- | ---- | ------- | -------------------- | ---------- | ---------- | ------- | ------------ | ------- |\n| SPRS-Conv | 58.48 | 66.11 | 85.0 | 58.2 | 69.5 | 35.7 | 15.5 | 85.3 | 58.8 | 40.9 | 70.0 | 68.1 |\n| SPRS-Conv inverse | 16.72 | 39.29 | 29.6 | 9.4 | 16.3 | 1.7 | 0.6 | 24.2 | 11.8 | 3.6 | 29.0 | 41.1 |\n\n**(2) Analysis of Experimental Results**: Compared with the experimental results of SPSS-Conv, the inversion experiment of SPRS has a more exaggerated performance loss. This experimental phenomenon is also easy to understand. When inversion is not performed, SPRS will select a position with a large magnitude for expansion, and the rest will be suppressed. 
We believe that it is beneficial to the task, and the irrelevant features are reduced in the downsample part, which shows that there is no obvious performance loss in the result. However, when we choose to suppress positions with large magnitudes, the spatial redundancy is amplified, and important features cannot be effectively expanded, and even discarded due to downsampling. After the above features are converted to Bird's Eye View, since the number of foreground points from the input is weakened, the effective features extracted by BEVbackone are very limited, resulting in a great degree of performance degradation. We will add and discuss these results in the final version of the paper.\n\n**Q3: Why it is necessary to multiply the feature with the magnitude mask?**\n\nThank you for this great comment.\n\n**A3: (1) why multiplying magnitude mask:** For this problem, our initial purpose is to let the magnitude mask as a bridge to provide additional gradients for supervising the feature norm, further enhancing the difference between important and non-important features. As the network is end-to-end optimized for the object detection task, the additional gradient will not interfere with the original gradient but instead try to make areas that are important for detection have a larger magnitude.\n\n**(2) Necessity of the multiplication operation:** We do further investigation on whether this multiplication is necessary. We observe that it only brings marginal performance gains as shown in Table below. This further confirms that without any additional guidance, the magnitude of features from a detection network is sufficient to serve as a good criterion for deciding important vs unimportant regions. This strengthens our initial claim and echoes our motivation of using magnitude as a selection criterion.\n\n| Method (KITTI) | Easy | Moderate | Hard |\n| ------------------------ | ----- | -------- | ----- |\n| SPSS-Conv | 89.22 | 84.36 | 78.83 |\n| SPSS-Conv (not multiply) | 89.02 | 84.13 | 78.81 |\n\n| Method (nuScenes) | mAP | NDS |\n| ------------------------ | ----- | ----- |\n| SPSS-Conv | 58.48 | 66.11 |\n| SPSS-Conv (not multiply) | 58.27 | 66.01 |\n\n(3) We will make this clear in the final version. \n\n**Q4: The pruning ratio in the experiments is also not explained?**\n\n**A4:** The way of dynamic division in SPS-Conv is optional, such as using a fixed threshold or simply taking the elements with top-k scores. During our experiments, in order to better control the pruning ratio, we choose the top-k result as the indicator. Sorry for the confusion here, we will correct and note in the article.\n\n**Q5: L159: should this be absolute magnitudes instead of absolute means?**\n\n**A5:** Thanks for the suggestion. We will correct it to “calculate the channel-wise absolute mean values on different voxels”.\n", " **Q1:The latency reduction of the SPS-Conv ?**\n\n**A1**: Please check out Q1 of the common question section, where we provide detailed latency test results. In the table, we quantitatively present the contributions of individual components of our proposed method. Due to the difference in the densities of the point clouds of the two datasets and the numbers of convolutional layers of the sparse CNNs, a discrepancy on the acceleration effects is inevitable.\n\nThe results in Table (common question Q1) show that using SPSS-Conv or SPSS-Conv alone can achieve more than 10% speed improvement. When they are used in combination, there is a speed increase of around 20%. 
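The per-frame times reported in these responses are measured on a single GPU with batch size 1; a generic harness of the kind below (shown only as a sketch, not necessarily the exact script we used) illustrates how such timings are typically collected:

```python
import time
import torch

@torch.no_grad()
def measure_latency_ms(model, sample, n_warmup=10, n_iter=100):
    for _ in range(n_warmup):
        model(sample)                  # warm-up: kernel selection, caching allocator, etc.
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_iter):
        model(sample)
    torch.cuda.synchronize()           # ensure all CUDA kernels finished before reading the clock
    return (time.time() - start) / n_iter * 1000.0
```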
This means that our method can deliver a notable reduction in latency. Note that we just use standard Pytorch and CUDA combined programming and do not optimize the implementation, which indicates that there still exists room for the further latency reduction as latency is very relevant to implementations. We will add this to our paper.\n\n**Q2:Inference speed on different inference engines ?**\n\n**A2**: Thanks for the suggestion. In fact, when we started to build this project, we did consider the choice of the inference engine. As the codebase of spconv is more extensible, our first version of the code was built on spconv. Due to the limited time in the rebuttal period, we cannot complete the development of the other two inference engines in such a short time. We are committed to further porting our code into their codebase and reporting the results in our final version.\n\n**Q3:How does random voxel dropout work compared with magnitude-based pruning ?**\n\nThank you for this insightful question. We conducted experiments and analysis as below.\n\n**A3: Experiments on random voxel dropout:** We carried out the experiment of random drop ablation on both KITTI and nuScenes datasets. The ratio of random drop is set the same as our magnitude-based pruning: for KITTI, we set pruning ratios in SPSS-Conv and SPRS-Conv as 0.5 and 0.5 respectively; as for nuScenes, they are set as 0.3 and 0.5. The table below shows the performance comparison of random drop and magnitude as indicators.\n\n| Method (nuScenes) | mAP | NDS |\n| --------------------- | ----- | ----- |\n| SPSS-Conv | 58.48 | 66.11 |\n| SPSS-Conv inverse | 55.84 | 64.72 |\n| SPSS-Conv random drop | 56.12 | 64.49 |\n| SPRS-Conv | 58.59 | 66.23 |\n| SPRS-Conv inverse | 16.72 | 39.29 |\n| SPRS-Conv random drop | 55.58 | 64.34 |\n\n| Method (KITTI) | Easy | Moderate | Hard |\n| --------------------- | ----- | -------- | ----- |\n| SPSS-Conv | 89.22 | 84.36 | 78.83 |\n| SPSS-Conv inverse | 89.15 | 79.13 | 78.47 |\n| SPSS-Conv random drop | 89.14 | 83.21 | 78.57 |\n| SPRS-Conv | 89.64 | 84.26 | 78.91 |\n| SPRS-Conv inverse | 70.36 | 49.81 | 44.06 |\n| SPRS-Conv random drop | 89.32 | 78.81 | 78.28 |\n\n**(1) Magnitude-base pruning vs Random drop**: compared to magnitude-based pruning, we observe using random drop as an indicator will lead to a certain loss in performance (around 2%)。This is caused by the randomness, part of the foreground is discarded, resulting performance degradation. However, the important part still has a 50% chance of being selected, which also guarantees performance to a certain extent.\n\n**(2) Analysis on random drop**: Randomly dropping points obtain reasonable results on both datasets. This further confirms our observation about the extreme imbalance of foreground and background. Even randomly dropping points, we still have a reasonable chance of dropping useless points. \n\n**(3) Drawback of random drop**: Despite of its degraded performance, the random drop method also has a certain degree of randomness. This is not desirable in practical applications as it may have the chance to lose some safety-critical areas which will cause problems in safety-critical applications. \n\n**Q4:How does the proposed SPS-Conv work on Waymo with much denser point cloud?**\n\n**A4:** We show the results of the proposed method on the Waymo dataset in Q2 of the Common question. As shown in the table, our method is also able to maintain competitive performance on various metrics on this dataset while saving 63% GFLOPs. 
This further illustrates the generality of our method. We will add and discuss these results in the revised version of the paper.", " **Q1: It is not clearly stated why magnitude is the key to discriminate foreground and background points?**\n\nThank you for the comment.\n\n**Visual Analysis:** To better understand points pruned by our magnitude criterion, we visualize point clouds before and after pruning. Note that the point clouds used for visualization are randomly chosen from the nuScenes dataset. The comparison results are shown in the link [[visual](https://drive.google.com/drive/folders/1aoQOrYRB57tKGHymMg3IuS2DRoLh00wR?usp=sharing)], we provide the original image and the pruned image with the file names _raw.png and _im.png respectively. And we roughly annotate the positions of cars (red) and pedestrians (yellow). we observe that most of the foreground points are preserved. For the background areas, points that fall in vertical structures, such as light, poles, and trees, are also preserved as they tend to be hard negatives, and easily confused with foreground objects. These points require a deep neural network with a certain capability to process in order to recognize them as background. In contrast, background points in flat structures such as road points are largely removed because they are easily identifiable redundant points.\n\n**Why foreground points with high feature magnitude?** To gain more insights into why high feature magnitude corresponds to the above patterns, we conjecture that this is caused by the training objective in 3D object detection. When training a 3D object detection model, the focal loss is adopted as default in 3D object detection. When we look closer at the focal loss, it will incur a loss on positive samples and hard negatives while easy negatives are removed from the loss. Thus, this will generate gradients in the direction that can incur an update of features for areas with positive samples and hard negatives. This can eventually make a difference in their feature magnitudes in comparison with areas for easy negatives which are less frequently considered in the optimization objective.\n\nWe will add the above to our paper.\n\n**Q2: Feature representations in the point cloud.**\n\nThanks for the comment. We hope our explanation in Q1 resolves your concern regarding our selection criterion. As we remove redundant points in intermediate layers, this indeed will have an impact on feature representation learning as the topology of point clouds might change. But as our performance doesn’t drop, this change of topology at least is not harmful to model performance and is effective in maintaining the original capability of our model. This in turn reflects that our selection criterion is successful which removes points but can still maintain model effectiveness. Also, since the model is optimized end-to-end, representation learning and spatial pruning based on magnitude are integrated together as a whole, it is difficult to quantify the contribution of each one solely. We will add this to our paper.\n", " Dear all reviewers:\n\nWe appreciate all the reviewers' valuable time and suggestions. In this section, we will first give detailed answers to some common questions, followed by responses to each reviewer separately.\n\n**Q1: The latency reduction achieved with the proposed SPS-Conv (Reviewer-cutj/Reviewer-aQxF)**\n\nThanks for the insightful comments. 
We conduct experiments and perform an analysis on this aspect as below.\n\n(1) Latency reduction experiments: Here we also provide the latency reduction by our approach as shown in the following table. We evaluate the effects of SPSS-Conv and SPRS-Conv alone and their combination on the KITTI and nuScenes datasets. All experiments are tested on the same server, using a single 2080ti GPU with the batch size set to 1. We also address specific questions/concerns regarding this study. Please refer to our separate response to each reviewer.\n\n| Method / speed(ms) | KITTI (VoxelNet) | speed up | GFLOPs | nuScenes (VoxelResNet) | speed up | GFLOPs |\n| :----------------: | ---------------- | -------- | ------ | ---------------------- | -------- | ------ |\n| baseline | 40ms | None | 7.6 | 50ms | None | 62.9 |\n| spss | 36ms | 10% | 4.51 | 44ms | 12% | 45.2 |\n| sprs | 33ms | 17.5% | 4.24 | 44ms | 12% | 47.2 |\n| sprs+spss | 30ms | 25% | 3.6 | 41ms | 18% | 34.3 |\n\n(2) **Latency vs FLOPs:** As shown in the table above, the reduced FLOPs cannot all translate into a reduction in latency but we still obtain a notable reduction in latency (KITTI 25%, nuScenes 18%) without sacrificing accuracies. This is because, besides the FLOPs of the model itself, latency is also closely related to hardware-level implementations and optimizations. Note that our model is implemented using PyTorch functions without specific optimization. Better hardware-level implementation of operations has the potential to reduce latency, which is beyond the investigation of this paper and will be our future work.\n\n(3) **Importance of FLOPs in energy efficiency:** Besides, given the same hardware, the number of FLOPs will determine the amount of energy consumption [1] at the inference stage: FLOPs measure the total number of operations required to compute the output, and each operation will cost certain energy depending on the hardware. Therefore, the large reduction in FLOPs also has positive impacts on practical applications in power-constrained scenarios.\n\n1. Desislavov, Radosvet, Fernando Martínez-Plumed, and José Hernández-Orallo. \"Compute and energy consumption trends in deep learning inference.\" *arXiv preprint arXiv:2109.05472* (2021).\n\n(4) We will add all the above in our paper.\n\n**Q2: Experimental results on Waymo dataset (Reviewer-cutj/Reviewer-aQxF)**\n\nThank you for the suggestion to help improve our paper. In accordance with the requirements of R2 and R4, we conduct experiments on Waymo, a dataset with a larger amount of dense point cloud data.\n\n(1) **Experimental setting:** Due to the limited time for rebuttal, we follow the setting in OpnePCDet and conduct experiments on one-fifth of the sub-data set which is an standard setting and doesn’t sacrifice the performance much. We choose a strong method, CenterPoint, as our baseline. 
The experimental results are shown below.\n\n| Performance@(train with 20% Data) | GFLOPs | Vec_L1 | Vec_L2 | Ped_L1 | Ped_L2 | Cyc_L1 | Cyc_L2 |\n| --------------------------------- | ------ | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |\n| CenterPoint (ResNet) | 76.7 | 72.76/72.23 | 64.91/64.42 | 74.19/67.96 | 66.03/60.34 | 71.04/69.79 | 68.49/67.28 |\n| CenterPoint + SPSS | 43.5 | 72.46/71.91 | 64.35/63.85 | 73.71/67.53 | 65.81/60.13 | 70.85/69.63 | 68.52/67.27 |\n| CenterPoint + SPRS | 55.2 | 72.80/71.96 | 64.36/64.07 | 73.99/67.84 | 66.05/60.40 | 70.42/69.69 | 68.85/67.37 |\n| CenterPoint + SPSS + SPRS | 28.8 | 72.68/72.13 | 64.66/64.16 | 74.39/68.02 | 66.00/60.36 | 71.08/69.84 | 68.47/67.28 |\n\n(2) **Analysis of results:**\n\nThe above experimental results demonstrate the generality of SPS-Conv on large-scale datasets with dense point clouds: SPS-conv still maintains the high performance while significantly reduces FLOPs. We will also include the above results in the revised version of the paper.\n\n\n\n#### Other Concerns \n\nOther concerns regarding typo, clarity and figures have been properly addressed in the revised paper.\n", " This paper proposed a non-learning convolution operator that simply use magnitude of feature as cue to remove redundant background points in the sparse convolution operation. The authors validated the proposed operator by combining it to popular sparse convolutional network based object detection and achieve more than 50% reduction in GFLOPs without compromising the performance on benchmark datasets. Strengths:\n- Simplicity of the proposed method\n\nWeaknesses\n- It is not clearly stated why magnitude is the key to discriminate foreground and background points. In the experimental section, the authors combines the operator with existing object detection networks CenterPoint and Voxel R-CNN. But the effectiveness of the methods seems to be more related to feature/representation of the input point-cloud and the authors didn’t discuss that part in the paper. Is it possible that the effectiveness of the propose method comes from the fact that background points has been filtered out by magnitude, thanks to the characteristics of the input point cloud feature representation used in the experiment? I suggest the author to add visualization point cloud that belongs to important set to help with understanding. ", " This paper studies the spatial redundancy in LiDAR-based 3D object detection. The authors propose spatial pruned sparse convolution (SPS-Conv) that skips computation for voxels with lower activation values. The authors have evaluated their proposed method on KITTI and nuScenes, achieving a 50% reduction in #MACs without loss of accuracy. **[Strengths]**\n\nThe paper is well-written and easy to follow. The proposed SPS-Conv is well-motivated, technically sound and achieves good empirical performance on two large-scale benchmarks. The authors have provided sufficient implementation details, which could facilitate the reproduction. The authors have also promised to release the code upon acceptance.\n\n**[Weaknesses]**\n\nThe authors use #GFLOPs as the primary efficiency metric throughout the paper. However, the reduction in #MACs does not necessarily translate into the measured speedup on the actual hardware (due to other costs such as data movement). I would highly recommend the authors report the measured latency on GPUs. 
As 3D object detectors contain more than sparse convolutions (since there are extra computations in the BEV decoder), it is completely acceptable to just present the latency of the sparse LiDAR encoder.\n\nThe inference engine has a substantial impact to the final inference speed. It would be great if the authors could try out the three available sparse convolution inference engines, including SpConv, TorchSparse and MinkowskiEngine, to see whether the proposed method can achieve consistent speedup with all of them.\n\nThe authors claim that the activation magnitude serves as a good indicator of selecting important voxels. It would be more interesting to see how random voxel dropout works (in additional to comparing with the inverse magnitude baseline).\n\nFinally, the authors have evaluated their proposed method on KITTI and nuScenes. It would be great if the authors could also evaluate their method on Waymo since the point cloud data on Waymo is much denser (and could be more redundant). I would love to see the answer to the following questions in the rebuttal:\n* What is the latency reduction achieved with the proposed SPS-Conv?\n* How does random voxel dropout work compared with magnitude-based pruning?\n* How does the proposed SPS-Conv work on Waymo with much denser point clouds?\n\nThe authors could refer to the previous section for more detailed comments.\n The authors have addressed the limitations and potential negative societal impact of their work.\n", " This paper presents a method for dynamically determining which features on which to perform convolutions in a sparse convolutional network. In particular, the authors apply a sigmoid on top of the magnitude of each feature to determine a score, on which they threshold to determine active sites in each feature map. The motivation is that features with higher magnitude are more important to the network. In the forward pass, only features with a score above a set threshold have convolutions applied, while low score features are passed through via a skip connection. Features are also weighted by this score. Experiments are performed by replacing of a regular sparse convolutional network with the proposed dynamic sparse convolution blocks, where minimal performance regressions are seen despite significant reductions in GFLOPs. Ablations are also provided demonstrating that large score thresholds can be used with minimal impact on performance. The authors provide a relatively simple solution to the dynamic sparse convolution problem, given the intuition that higher feature norms correlate with more important features. The method can be dropped into any existing sparse convolution network, and is demonstrated to have significant reductions in computation cost. The sparsity can also be controlled at test time by tuning the score threshold. The method is well explained with nice figures outlining the pipeline, and the results are compelling, with good coverage across two datasets. \n\nFirstly, the intuition that features with higher norms are more important mostly makes sense, but some experimental validation would be very helpful. In particular, as the convolution operator applies the dot product between features, it's possible that a feature has relatively low magnitude compared to others, but high magnitude in a channel that also has high magnitude in the convolutional kernel. This may not be captured by the current approach which looks at the overall norm. 
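One quick sanity check would be to measure how much the two criteria actually disagree, e.g. the overlap between the voxels kept by the overall-magnitude score and those kept by a per-channel maximum (a toy sketch with random features, purely illustrative):

```python
import torch

feats = torch.randn(10000, 64)                       # stand-in for (N, C) voxel features
k = feats.shape[0] // 2                              # a 50% pruning ratio
by_mean = torch.topk(feats.abs().mean(dim=1), k).indices
by_max = torch.topk(feats.abs().amax(dim=1), k).indices
overlap = len(set(by_mean.tolist()) & set(by_max.tolist())) / k
print(f"overlap between mean-|.| and max-|.| selections: {overlap:.2f}")
```

If the overlap is consistently high on real features, this concern is minor; if not, it would be worth discussing in the paper.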
\n\nA more concerning issue I see is the ablation in Table 6, where experiments are performed by only keeping the lowest scored features. While there is a significant regression in the medium cases, the difference for easy and hard cases is actually very minimal. The text describes this as a dramatic performance drop, but it's not convincing that this is actually the case. If the network is able to still achieve similar performance on hard examples when using what should be completely irrelevant features, what does that say about the foundational assumption that high norm features are the most important?\n\nAlso, in (4), the features are further weighted by the score for that feature. It's unclear why this is necessary, compared to only using the scores to select sites for convolutions. Does this weighting improve performance or provide some theoretical guarantees? Some ablations for this would be helpful. Otherwise, if the base assumption is not always correct, this weighting may serve to further remove the effect of some potentially important features.\n\nThis weighting also serves to provide a gradient on the feature norms (perhaps this is the intention?). This seems like it could be useful, but might also interfere with the final detection task. Some explanations/ablations would be useful.\n\nThe pruning ratio in the experiments is also not explained. It sounds like it is the top_k features, but could also be the score threshold. \n\nL159: should this be absolute magnitudes instead of absolute means? Please see the strengths and weaknesses section. My main concern regards the ablations in Table 6, which would be alleviated with more explanations/exploration of the underlying assumption of this work. No limitations section is provided. Some discussion of the validity of the base assumption would help future readers.", " In this paper, the authors study a previously under-explored direction in 3D deep learning acceleration: spatial redundancy. The major observation from the authors is that most points in the large scale LiDAR scans are background points. Thus, it is possible to reduce the computation on these points. The authors then design spatial pruned (submanifold/regular) sparseconv layers using magnitude-based importance masks. Experiment results show 2x FLOPs reduction without accuracy loss. Strengths:\n\n- As I have mentioned in the summary, this work explores spatial redundancy in 3D point clouds, a new direction in efficient 3D deep learning.\n- The paper is clearly written and most of the figures look good to me.\n- Experiment results are promising: 2x FLOPs reduction with no accuracy loss on nuScenes. Note that the baseline is a representative network, CenterPoint.\n\nWeaknesses:\n\n- The biggest question related to this work is whether such reduction in FLOPs can be translated to latency improvement. Especially when you consider latest sparse conv implementations in spconv 2.x which uses highly customized computation kernels, I'm a little bit concerned whether such pruning will lead to load imbalance between different CUDA threads and limit the speedup. Also, I'm curious what is the overhead introduced to calculate the importance masks.\n- It will be great if the authors can add more experiments on larger-scale datasets, such as Waymo. Waymo is much denser than nuScenes, it will be interesting to see the compression ratio and speedup. 
If the paper is eventually accepted, I'm also interested in seeing some application in sensor fusion methods on nuScenes.\n- [Minor] I would suggest the authors to make Section 3 more compact or move some details to the appendix as the target readers can be very familiar with sparse conv. Please address my comments in \"Weaknesses\". See my comments in \"Weaknesses\". The major concern is on latency reduction and performance on larger scale datasets such as Waymo." ]
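The reviews above all center on SPS-Conv's magnitude-based importance scoring (a sigmoid on each voxel's feature magnitude, a threshold to pick "important" voxels for convolution, and a skip path plus score re-weighting for the rest). As a rough illustration of that idea only, here is a minimal PyTorch sketch on a dense (N, C) feature tensor with a toy pointwise convolution; the function names, the threshold value, and the use of a plain matmul instead of a real submanifold 3x3 sparse convolution are assumptions for brevity, not the paper's implementation.

```python
import torch

def magnitude_importance(feats: torch.Tensor, threshold: float = 0.5):
    """Score each voxel by the sigmoid of its mean absolute feature magnitude,
    then split voxels into an 'important' set (convolved) and the rest (skipped)."""
    score = torch.sigmoid(feats.abs().mean(dim=1))  # (N,)
    important = score >= threshold
    return score, important

def pruned_pointwise_conv(feats: torch.Tensor, weight: torch.Tensor, threshold: float = 0.5):
    """Toy spatially pruned conv: full computation only on important voxels,
    identity skip elsewhere (assumes matching in/out channel counts)."""
    score, important = magnitude_importance(feats, threshold)
    out = feats.clone()                                   # low-score voxels pass through
    out[important] = (score[important, None] * feats[important]) @ weight
    return out

if __name__ == "__main__":
    feats = torch.randn(4096, 16)                         # 4096 active voxels, 16 channels
    weight = 0.1 * torch.randn(16, 16)
    print(pruned_pointwise_conv(feats, weight).shape)     # torch.Size([4096, 16])
```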
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "VJAg41xViH", "WFg8LnOVn-U", "2ApmzBcmzlO", "LQZAKxxSqgG", "qRj2gV6q4L-", "nips_2022_QqWqFLbllZh", "KcsaNv8iDD23", "aaJwubmzNIB", "aaJwubmzNIB", "aaJwubmzNIB", "dbnbi7oRQq", "dbnbi7oRQq", "o2yFdphOjcS", "qRj2gV6q4L-", "nips_2022_QqWqFLbllZh", "nips_2022_QqWqFLbllZh", "nips_2022_QqWqFLbllZh", "nips_2022_QqWqFLbllZh", "nips_2022_QqWqFLbllZh" ]
nips_2022_H5z5Q--YdYd
BMU-MoCo: Bidirectional Momentum Update for Continual Video-Language Modeling
Video-language models suffer from forgetting old/learned knowledge when trained with streaming data. In this work, we thus propose a continual video-language modeling (CVLM) setting, where models are supposed to be sequentially trained on five widely-used video-text datasets with different data distributions. Although most of existing continual learning methods have achieved great success by exploiting extra information (e.g., memory data of past tasks) or dynamically extended networks, they cause enormous resource consumption when transferred to our CVLM setting. To overcome the challenges (i.e., catastrophic forgetting and heavy resource consumption) in CVLM, we propose a novel cross-modal MoCo-based model with bidirectional momentum update (BMU), termed BMU-MoCo. Concretely, our BMU-MoCo has two core designs: (1) Different from the conventional MoCo, we apply the momentum update to not only momentum encoders but also encoders (i.e., bidirectional) at each training step, which enables the model to review the learned knowledge retained in the momentum encoders. (2) To further enhance our BMU-MoCo by utilizing earlier knowledge, we additionally maintain a pair of global momentum encoders (only initialized at the very beginning) with the same BMU strategy. Extensive results show that our BMU-MoCo remarkably outperforms recent competitors w.r.t. video-text retrieval performance and forgetting rate, even without using any extra data or dynamic networks.
Accept
This work presents a study on continual video-language modeling. In addition to the modeling side of things (BMU-MoCO), the authors construct a new benchmark in which they compare a number of existing methods. While I think it's great that the authors came up with a new benchmark, it's always a somewhat difficult analysis when a paper comes up with both a new benchmark and a method that beats the previous methods on this new benchmark. This is a shared concern with at least one of the reviewers. I do note that computational limitations make it difficult for the authors to thoroughly test on many other benchmarks. The authors do provide some UCL results in the rebuttal, which strengthen their case. All in all, I have to agree with reviewer ZHhB that the method is somewhat complicated and that the gains from the global branch seem overstated. I found the methods part hard to follow as well. All in all, the work does seem interesting and important, but perhaps I am more convinced about the benchmark rather than the method. As is, I would still recommend it for acceptance, but I feel it would need a good amount of work to improve clarity of the exposition and ideally more robust empirical evidence (that doesn't involve benchmarks created by the authors themselves).
train
[ "kzlelIfENv", "6zl3ymWKhKK", "S1FDyfIpfOZ", "xPfmENZFPc", "Hclm6svFN2WN", "lSUq70lL6-8", "KZ5Id0HpUMm", "W3xlJKxni5l", "3uxolWR4eHh", "DcopD6h7jSS", "anXLYgLcawF", "2TPPIFGSs-_", "wFUi9weFn0h", "pwAgPw6jSiY", "srJqej-zfHT", "H3SiN9K4NZ_" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate all reviewers’ time and efforts in reviewing our paper. We are glad to find that reviewers generally recognized our contributions: \n\n* **Model.** Proposing a novel cross-modal MoCo-based model with bidirectional momentum update for continual learning [6RoA, ZHhB].\n* **Setting.** Introducing a new setting of the Video-Language Modeling task in a continual learning scenario [6RoA, ZHhB, oj9S]. \n* **Experiment.** The experimental results on the proposed benchmark are promising [6RoA, ZHhB, oj9S].\n* **Writing.** The Paper is well-structured [oj9S]; the contribution is clearly stated and supported in the experimental section. [oj9S]; \n\nAnd we also thank all reviewers for their insightful and constructive suggestions, which help a lot in further improving our paper. In addition to the pointwise responses below, we summarize supporting experiments added in the rebuttal according to reviewers’ suggestions.\n\n**New Experiments** \n\n* Comparison among different methods on unsupervised continual learning (UCL) setting [6RoA].\n* More results by applying our BMU-MoCo for continual reinforcement learning [ZHhB].\n* The training stability analysis of the final loss [6RoA].\n\nWe are glad to see that our response address most of Reviewer oj9S’s concerns. We hope that our pointwise responses below could clarify all reviewers’ confusion and alleviate all of their concerns. We'd like to thank all reviewers’ time again.\n", " Dear Reviewer 6RoA,\n\nThanks for your feedback again, which will help us further improve our final paper!\n\n**Q2. \"Multiple-positives infoNCE loss (our loss is two-positives infoNCE loss)\". The paper you attach is not open accessible... I guess that your Equation (16) and other equations involving with two-positives may be not correct. You may refer to your code to check whether it exactly matches with Equation (16).** \\\n**A2.** \nSorry for the inaccessible literature and we have replaced it with an accessible one. We agree that training stability is vital and there only requires one exponential term on the numerator in the original InfoNCE loss. Nevertheless, in our BMU, we utilize global encoders to further preserve earlier knowledge in addition to local encoders and there exist two positive samples for each video/text to obtain our loss $L_{final}$. Therefore, we slightly modified the InfoNCE loss with multiple positive targets following [1, 2] (two terms on the numerator), which is consistent with the MIL-NCE [1] loss. We have also double-checked the code and made sure it was consistent with Equation (16).\n\n\n**Q3. By \"200G memory\", do you mean RAM or disk space?** \\\n**A3.** \nYes, 200G memory means the RAM. Since memory-based methods need to be trained with memory data (which is supposed to be loaded into the RAM), these methods bring a huge computation resources cost. Though our machine has 500 GB RAM which can easily handle it, these methods still make it harder to apply in real-world applications. Note that our BMU only needs 0.5 GB extra RAM in total to save the global momentum encoders. Therefore, it is much more computational friendly. \n\n*Thanks for your time again! Please don’t hesitate to let us know, if there are any additional clarifications we can offer. Look forward to your post-rebuttal rating!*\n\nBest, \\\nAuthors\n\n[1] Miech, A., Alayrac, J. B., Smaira, L., Laptev, I., Sivic, J., \\& Zisserman, A. End-to-end learning of visual representations from uncurated instructional videos. 
CVPR 2020.\\\n[2] Huo, Y., Ding, M., Lu, H., Fei, N., Lu, Z., Wen, J. R., \\& Luo, P. Compressed video contrastive learning. NeurIPS 2021.", " Thank you very much. We will carefully improve the writing in the final version.", " Thank you for providing all the due responses to my questions and concerns. Given that you addressed most of my remarks, I revised my score. ", " 2. \"multiple-positives infoNCE loss (our loss is two-positives infoNCE loss)\". The paper you attach is not open accessible... I guess that your Equation (16) and other equations involving with two-positives may be not correct. You may refer to your code to check whether it exactly matches with Equation (16).\n3. By \"200G memory\", do you mean RAM or disk space?", " Thanks for your post-rebuttal feedback! For the new questions, our answers are given as follows:\n\n**Q1. It's good to see the extra results on UCL. I'm afraid that it may need another peer review to assess the soundness and significance of the new results.** \\\n**A1.** Thanks. Our extra results on UCL are convincing, since we have strictly followed the UCL setting proposed in LUMP (Representational continuity for unsupervised continual learning, ICLR 2022). More importantly, we can clearly observe the performance gain over the state-of-the-arts obtained by our BMU-MoCo, demonstrating the effectiveness of our BMU-MoCo in other continual learning settings. Overall, we think that our new results are convincing and important for the continual learning community.\n\n**Q2. The authors may not understand my question. The log-softmax implementation requires only 1 exponential term on the numerator (the number above the line in a common fraction), but in Equation (16) there are two exponential terms. More details of the implementation with the log-softmax function will be useful.** \\\n**A2.** Good suggestion. In PyTorch, we regard the two terms on the numerator in Equation (16) as one term, and then define our contrastive loss exactly the same as the standard log-softmax function. Note that such contrastive loss is known as the multiple-positives infoNCE loss (our loss is two-positives infoNCE loss), which has been widely used in recent works on contrastive learning, such as [1].\n\n**Q3. \"This buffer requires 200 GB to save these extra data pairs.\" In this case, this buffer can not saved in memory. How does the author implemented the ER-ring with such large buffer?** \\\n**A3.** Sorry for the confusion. This buffer can be directly saved in memory, since our machine has 500 GB memory in total.\n\n[1] Miech, A., Alayrac, J. B., Smaira, L., Laptev, I., Sivic, J., \\& Zisserman, A. End-to-end learning of visual representations from uncurated instructional videos. CVPR 2020.\n", " Dear Reviewer oj9S,\n\nThanks again for your insightful suggestions and comments. As the deadline for discussion is approaching, we are happy to provide any additional clarifications that you may need.\n\nIn our previous response, we have carefully studied your comments and provided new experiments/additional explanations. We hope that our response has convinced you of the merits of our submission.\n\nPlease do not hesitate to contact us if there are other clarifications or experiments we can offer. Thanks a lot!\n\nBest,\\\nAuthors", " Dear Reviewer 6RoA,\n\nThanks again for your insightful suggestions and comments. 
As the deadline for discussion is approaching, we are happy to provide any additional clarifications that you may need.\n\nIn our previous response, we have carefully studied your comments and made detailed responses summarized below:\n\n* Clarified the proposed benchmark for the continual video-language modeling (CVLM) setting.\n* Conducted additional experiments on unsupervised continual learning (UCL) to show the effectiveness of the proposed method on other continual learning benchmarks.\n* Added visualization results to show the stability of our contrastive loss implemented.\n* Provided more details on the implementations of all the baselines.\n\nWe hope that the provided new experiments and additional explanations have convinced you of the merits of our submission.\n\nPlease do not hesitate to contact us if there are other clarifications or experiments we can offer. Thanks a lot!\n\nBest,\\\nAuthors", " 1. It's good to see the extra results on UCL. I'm afraid that it may need another peer review to assess the soundness and significance of the new results.\n2. The authors may not understand my question. The log-softmax implementation requires only 1 exponential term on the numerator (the number above the line in a common fraction), but in Equation (16) there are two exponential terms. More details of the implementation with the log-softmax function will be useful. \n3. \"This buffer requires 200 GB to save these extra data pairs.\" In this case, this buffer can not saved in memory. How does the author implemented the ER-ring with such large buffer? ", " **Q6. Question 2: Could you explain in detail what happens when the queue size Nq is larger than Nb?** \\\n**A6.** The queues used in our BMU-MoCo are the same as those in MoCo [a] (and cross-modal MoCo). Typically, the queue size is set to be much larger than the batch size to save a large quantity of negative samples. Concretely, after trained on each mini-batch with the batch size $N_b$, the extracted features are pushed into the queues (while the earliest batches are popped out) and the features stored in the queues are used as negative samples for contrastive learning. Please see MoCo [a] for more details.\n\n**Q7. Question 3: What does FR for Task 1 indicate? Is there a Task 0 then? The comparison is not clear.** \\\n**A7.** We have defined the Forgetting Rate (FR) in Lines 209--212 of our main paper. Note that the results in Table 1 are obtained by the final model $M_5$. Therefore, according to our definition, the FR for Task 1 is the performance degradation on Task 1 when the model is trained after all 5 tasks (i.e., $A_1^1 - A_1^5$).\n\n**Q8. Question 4: Is 0.5 additional GB of memory for BMU-MoCo local only or both?** \\\n**A8.** Sorry for the confusion. 0.5 GB is only for our full BMU-MoCo (local+global), which represents the additional memory for saving global momentum encoders. It becomes 0 GB for our BMU-MoCo (local), since all methods are implemented based on the same architecture (Base-MoCo).\n\n**Q9. Question 5: How are frames sampled and fed to ViT and how is the averaging over the whole video being performed?** \\\n**A9.** Frames are randomly and uniformly sampled (8 frames per video), which is widely-used in recent video-language modeling works (e.g., ClipBERT [b] and Frozen [c]). After extracted all frame features, we simply average them to obtain the whole video features (see Section 3.2).\n\n**Q10. Limitations: In my opinion, the limitations of this work are two-fold. 
First, as the authors mention, they only tackle the CVML task, however, to fully address this task, the results of state-of-the-art approaches on particular datasets should also be included, showing that they indeed struggle with catastrophic forgetting. Otherwise, it would be beneficial to address other cross-modal tasks.** \\\n**A10.** In this work, we choose to study the CVLM setting based on cross-modal MoCo, and the results in Figure 1 show that the catastrophic forgetting problem indeed exists. Since the state-of-the-art approaches to VLM including COTS [32] and HiT [30] have similar cross-modal MoCo architectures, they would also suffer from catastrophic forgetting. Therefore, our study on the CVLM setting is vital for video-language modeling with streaming data. Additionally, our proposed BMU-MoCo is generalizable and can be transferred to other cross-modal tasks or other continual learning settings (see our response to Q5).\n\nWe wish that our response has addressed your concerns, and turns your assessment to the positive side. If you have any questions, please feel free to let us know during the rebuttal window. We appreciate your suggestions and comments! Thank you!\n\n[a] He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. CVPR 2020.\\\n[b] Lei, J., Li, L., Zhou, L., Gan, Z., Berg, T. L., Bansal, M., and Liu, J., Less is more: ClipBERT for video-and-language learning via sparse sampling, CVPR 2021.\\\n[c] Bain, M., Nagrani, A., Varol, G., and Zisserman, A., Frozen in time: A joint video and image encoder for end-to-end retrieval, ICCV 2021.", " Thank you for the constructive comments and suggestions.\n\n**Q1. Weakness 1, 2, 4: The method section is a bit messy and becomes hard to follow. Notations in Sec. 3 are definitely over complex, and could be simplified. Some typos and minor editing issues.** \\\n**A1.** Thanks. We have carefully polished our paper.\n\n**Q2. Weakness 3: The method is not entirely free from additional resources contrary to what the authors claim, since it requires an additional model (momentum encoder- local + global) to be trained and stored, as well as the queue for the training.** \\\n**A2.** There seem to exist some misunderstandings. Note that the traditional MoCo [a] and its cross-modal versions (COTS [32] and HiT [30]) all utilize momentum encoders and queues to construct the contrastive learning objectives. In fact, it has been clearly claimed in MoCo that using momentum encoders and queues can greatly reduce the computational cost during training, since it can adopt a small batch size while still maintaining a large queue of negative samples (which is essential in contrastive learning). In this paper, our BMU-MoCo and all the competitors are based on the same basic Base-MoCo with momentum encoders and queues. Under such fair setting, we evaluate our BMU-MoCo by comparing it to all the competitors. Specifically, we have proposed two BMU-MoCo models, one only utilizes local momentum encoders and the other utilizes local+global momentum encoders: (1) For the former BMU-MoCo (local), it has already outperformed all the competitors with exactly the same architecture of Base-MoCo (i.e., without using any extra memory and dynamic networks). (2) For the latter BMU-MoCo (local+global), although it maintains more momentum encoders and queues than BMU-MoCo (local), the additional cost is limited (0.5 GB in total) and fixed (as the task number grows) while achieving better performance. 
In conclusion, our BMU-MoCo (local) beats all the competitors under a fair setting and our BMU-MoCo (local+global) further brings performance boost with limited extra cost.\n\n**Q3. Weakness 5: The conclusions on the m hyperparameter are unclear.** \\\n**A3.** Thanks for pointing this out. Since the hyper-parameter m is for the basic MoCo architecture, we directly set m to 0.99 for all methods (the same m is used in COTS [32]), which has been stated in Lines 269--270 of our main paper. In addition, the effect of the hyper-parameter $\\hat{m}$ has been clearly analyzed in Lines 268--279 of our main paper.\n\n**Q4. Weakness 6 and Question 1: Both figures 1 are non-informative and confuse the reader. In Figure 1a) what do colors represent? What are the current models and the final model?** \\\n**A4.** Sorry for the confusion. Note that we have explained the concept of current models and final model in Lines 36--38 of our main paper. To be more specific, the CVLM setting has a sequence of 5 tasks and the models are supposed to be sequentially trained on all these tasks. Therefore, the result of the current model on Task i is obtain by evaluating the model on Task i right after trained on Task i (before it is trained on Task i+1); the result of the final model on Task i is obtain by evaluating the model on Task i after trained on all 5 tasks. Particularly, the results of the current and final models on Task 1 in Figure 1(a) show that the performance of Base-MoCo (on Task 1) drops significantly after trained on all 5 tasks. \n\n**Q5. Weakness 7: Lack of clear motivation why CVLM among other cross-modal tasks and evaluation only on one of them seems to be quite limited.** \\\n**A5.** It is worth noting that our proposed BMU-MoCo is indeed generalizable to other cross-modal tasks (e.g., image-text pre-training setting), since the two-stream architecture is generic for these tasks. However, considering the resource consumption and the paper conciseness, we choose to study one task in this work, i.e., video language pre-training with streaming data, which has important practical significance but has not been discussed in earlier works. To that end, we propose a challenging/novel Continual Video Language Modeling (CVLM) setting and conduct extensive experiments on five video-text datasets to verify the effectiveness of our proposed BMU-MoCo. More importantly, our BMU-MoCo could be easily transferred to other continual learning settings, such as unsupervised continual learning (see our response to Q1 of Reviewer 6RoA) and continual reinforcement learning (see our response to Q1 of Reviewer ZHhB). \n\n[a] He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. CVPR 2020.\n", " Thank you for the positive comments and insightful suggestions.\n\n**Q1. This paper focuses on the video-language modeling in continual learning, however, the experiments only conducted on the text-to-video retrieval task. Since the proposed framework is two-stream architecture and is suitable for adjusting to the other video-language tasks, the experiments could be further investigated into multi-task transfer learning (A deep hierarchical approach to lifelong learning in Minecraft, AAAI17) in VLM domain, instead of on the single task. Could the author conducted more experiments on the multi-task transfer learning part?** \\\n**A1.** Good advice! As the reviewer mentioned, we focus on continual video-language modeling (CVLM) in this paper. 
Concretely, we choose to build our CVLM benchmark on the fundamental task (i.e., video-text retrieval), since video-language pre-training with streaming data is a realistic problem of practical significance. Meanwhile, the five video-text datasets of our benchmark come from different domains, which makes the CVLM setting even harder. Therefore, our proposed CVLM setting is both important and challenging. That said, it is still a good advice for us to investigate multi-task transfer learning using our BMU-MoCo. We then train a baseline model DrQ [a] and its BMU version DrQ+BMU under a multi-task transfer learning setting with three different visual control tasks of DMcontrol (which can also be called \"continual reinforcement learning\"). We obtain the evaluation rewards of the final models on all three tasks (i.e., Walker-Stand, Walker-Walk, and Walker-Run) as follows:\n\n| Method | Walker-Stand | Walker-Walk | Walker-Run |\n|----| -----: | -----: | -----: |\n|DrQ [a]| 540 | 595 | 383 |\n|DrQ+BMU| $\\bf679$ | $\\bf837$ | $\\bf413$ |\n\nWe can observe that our BMU strategy can effectively alleviate the catastrophic forgetting problem in multi-task transfer learning (i.e., continual reinforcement learning). These results demonstrate the general applicability of our BMU strategy. \n\n**Q2. The setting of two momentum encoder branches(local and global) is too complicated and the gains from global branch in Table 1 seem not significant.** \\\n**A2.** Thanks for pointing this out. We can see from Table 1 that BMU-MoCo (local) is indeed effective since it beats all the competitors. On top of BMU-MoCo (local), the global momentum encoders are used to further verify the effectiveness of BMU (i.e., updating encoders with momentum encoders to review past knowledge). This is supported by the fact that BMU-MoCo (local+global) performs better than BMU-MoCo (local). We believe that the gains from global branch over BMU-MoCo (local) are really remarkable since the results obtained by BMU-MoCo (local) are already good enough.\n\n[a] Yarats, D., Kostrikov, I., \\& Fergus, R. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. ICLR 2020.", " Thank you for the constructive comments and suggestions.\n\n**Q1. The proposed method is only tested on the proposed benchmark. The authors may want to first make the proposed benchmark solid, or to show effectiveness of the proposed method on other continual learning benchmarks.** \\\n**A1.** Thanks for pointing this out. To our best knowledge, we have for the first time studied the continual video-language modeling (CVLM) setting and built a benchmark for CVLM which contains five video-text datasets (with 500K video-text pairs in total). In fact, the workload of constructing such benchmark is extremely heavy, since we have to re-implement 6 recent state-of-the-art continual learning competitors for CVLM along with our newly proposed BMU-MoCo and train all of them on five video-text datasets. Concretely, the total training time on such benchmark (with 6 baseline methods and 2 proposed methods) is around 7 days with 8 Tesla V100 GPUs (i.e., 2 months with 1 Tesla V100 GPU). In addition, as the reviewer suggested, we also apply the proposed BMU-MoCo to other continual learning settings such as unsupervised continual learning (UCL) to further verify its effectiveness. 
Concretely, we give the results of UCL for image classification (under the same setting as in LUMP [a]): \n\n| Method | Cifar-10 | Cifar-100 | Tiny-ImageNet |\n| -------- | ---------:| ---------:| -------------:|\n| Finetune | 90.11 | 75.42 | 71.07 |\n| DER [b] | 91.22 | 77.27 | 71.90 |\n| LUMP [a] | 91.00 | 82.30 | 76.66 |\n| BMU-MoCo | **92.80** | **83.81** | **77.45** |\n\nWe can observe that our BMU-MoCo is still effective under the UCL setting and outperforms all competitors (their results are directly copied from LUMP [a]). We have additionally provided the results of UCL in Table 4 of the supplementary material. \n\n**Q2. Equation (16) between Line 185 and 186: how is this contrastive loss implemented? Typically, such cross-entropy loss is implemented with log-softmax function to stabilize the computation. However, in Equation (16), it seems not straightforward to use log-softmax to compute this loss. If computed by softmax and then log, is it stable?** \\\n**A2.** Equation (16) is actually the log-softmax loss implemented as \"softmax and then log\" in Pytorch, which is the cross-modal version of the original contrastive loss in the well-known MoCo [c]. To show that this contrastive loss is indeed stable, we plot the training process (i.e., the change of contrastive loss) in Figure 2 of the supplementary material.\n\n**Q3. Line 290, \"even outperforms ER-ring using 10\\% memory (about 200GB under our CVLM setting)\". I'm curious that how this baseline that requires 200GB memory was implemented. Could the authors provide more details on the implementations of all these baselines. From my perspective, it would be a better contribution if the authors can make the CVLM benchmark and its baselines solid, compared with the newly proposed method.** \\\n**A3.** Sorry for the confusion. As stated in Lines 227--230 of our main paper, all rehearsal-based competitors (including ER-ring) have 10K stored video-text pairs as their memory buffer. This buffer requires 200 GB to save these extra data pairs. Moreover, as the reviewer suggested, we have added more implementation details of all these baselines in Section 4 of the supplementary material.\n\nWe wish that our response has addressed your concerns, and turns your assessment to the positive side. If you have any questions, please feel free to let us know during the discussion stage. We appreciate your suggestions and comments! Thank you very much!\n\n[a] Madaan, D., Yoon, J., Li, Y., Liu, Y., \\& Hwang, S. J. Representational continuity for unsupervised continual learning. ICLR 2022.\\\n[b] Buzzega, P., Boschini, M., Porrello, A., Abati, D., \\& Calderara, S. Dark experience for general continual learning: a strong, simple baseline. NeurIPS 2020.\\\n[c] He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. CVPR 2020.", " The authors propose a new continual video-language modeling (CVLM) setting, where models are supposed to be sequentially trained on five widely-used video-text datasets.\nThe authors propose the BMU-MoCo, a cross-modal MoCo-based model with a novel bidirectional momentum update (BMU) strategy.\nTo further boost our BMU-MoCo, a pair of global momentum encoders are maintained by the same BMU strategy.\nThe proposed method is tested on the proposed benchmark, and shows promising results. Strength:\nOn the newly promised benchmark CVLM, the authors implemented several previous well-known baselines. This is a good effort. 
\n\nWeaknesses:\nThe proposed method is only tested on the proposed benchmark. The authors may want to first make the proposed benchmark solid, or to show the effectiveness of the proposed method on other continual learning benchmarks. 1. Equation (16) between Line 185 and 186: how is this contrastive loss implemented? Typically, such a cross-entropy loss is implemented with the log-softmax function to stabilize the computation. However, in Equation (16), it seems not straightforward to use log-softmax to compute this loss. If computed by softmax and then log, is it stable?\n\n2. Line 290, \"even outperforms ER-ring using 10% memory (about 200GB under our CVLM setting)\". I'm curious how this baseline that requires 200GB memory was implemented. Could the authors provide more details on the implementations of all these baselines? From my perspective, it would be a better contribution if the authors can make the CVLM benchmark and its baselines solid, compared with the newly proposed method. Yes. ", " This paper incorporates video-language modeling into continual learning. To overcome the challenges in continual learning, this paper proposes a novel cross-modal MoCo-based model with bidirectional momentum update (BMU). strengths: \n1) This paper provides a new continual video-language modeling setting.\n2) The extensive experiments demonstrate the effectiveness of the proposed method.\n\nweaknesses: \n1) This paper focuses on video-language modeling in continual learning; however, the experiments are only conducted on the text-to-video retrieval task. Since the proposed framework is a two-stream architecture and is suitable for adapting to other video-language tasks, the experiments could be further extended to multi-task transfer learning [1] in the VLM domain, instead of on a single task.\n2) The setting of two momentum encoder branches (local and global) is too complicated and the gains from the global branch in Table 1 seem not significant.\n\n[1] A deep hierarchical approach to lifelong learning in Minecraft, AAAI17. Could the authors conduct more experiments on the multi-task transfer learning part? yes", " The authors introduce a new setting of the Video-Language Modeling task in a continual learning scenario. They propose an evaluation schema based on text2video (cross-modal) retrieval with 5 different tasks which correspond to separate popular video-language modeling datasets. Moreover, the authors propose their own method, namely BMU-MoCo, based on the MoCo self-supervised representation learning approach, and compare it against other continual learning methods. The proposed method gives the best results given the evaluation metrics that measure the performance degradation between consecutive tasks, particularly ‘Forgetting rate’ and ‘Harmonic Mean’. Strengths:\n1.\tThe paper is well-structured, and the contribution is clearly stated and supported in the experimental section.\n2.\tThe authors propose a new approach to evaluate the effectiveness of VLM models in a continual learning setup with new metrics to evaluate the task.\n3.\tThe proposed method gives the best results among other continual learning-based approaches.\n\nWeaknesses:\n1.\tThe method section is a bit messy and becomes hard to follow.\n2.\tNotations in Sec. 3 are definitely overly complex, and could be simplified, e.g. by dropping ‘i’ in Eq.
1-6.\n3.\tThe method is not entirely free from additional resources, contrary to what the authors claim, since it requires an additional model (momentum encoder, local + global) to be trained and stored, as well as the queue for the training.\n4.\tSome typos and minor editing issues.\n5.\tThe conclusions on the m hyperparameter are unclear.\n6.\tBoth panels of Figure 1 are non-informative and confuse the reader.\n7.\tLack of clear motivation for choosing CVLM among other cross-modal tasks; evaluation on only one of them seems quite limited.\n In Figure 1a), what do the colors represent? What are the current models and the final model?\n\nCould you explain in detail what happens when the queue size Nq is larger than Nb?\n\nWhat does FR for Task 1 indicate? Is there a Task 0 then? The comparison is not clear.\n\nIs the 0.5 additional GB of memory for BMU-MoCo local only or both?\n\nHow are frames sampled and fed to ViT, and how is the averaging over the whole video performed?\n The authors very briefly mention the limitations of their work, with no discussion of potential negative impact. In my opinion, the limitations of this work are two-fold. First, as the authors mention, they only tackle the CVLM task; however, to fully address this task, the results of state-of-the-art approaches on particular datasets should also be included, showing that they indeed struggle with catastrophic forgetting. Otherwise, it would be beneficial to address other cross-modal tasks." ]
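Several exchanges in this record ask how the two-positive contrastive loss of Equation (16) can be computed stably with a log-softmax. Below is a generic, numerically stable sketch of an InfoNCE loss with multiple positives (the MIL-NCE-style formulation the authors point to); the tensor shapes, temperature, and function name are illustrative assumptions, and this is not the authors' released code.

```python
import torch
import torch.nn.functional as F

def multi_positive_info_nce(query, positives, negatives, temperature=0.07):
    """InfoNCE with P positives per anchor, computed without materializing any
    explicit softmax: -log(sum_p e^{s_p} / sum_all e^{s}) = LSE(all) - LSE(pos).

    query:     (B, D)    anchor embeddings (e.g., video features)
    positives: (B, P, D) positive embeddings (P=2 when local and global
                         momentum encoders each provide one positive)
    negatives: (K, D)    queue of negative embeddings
    """
    query = F.normalize(query, dim=-1)
    positives = F.normalize(positives, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logits = torch.einsum("bd,bpd->bp", query, positives) / temperature  # (B, P)
    neg_logits = (query @ negatives.t()) / temperature                       # (B, K)
    all_logits = torch.cat([pos_logits, neg_logits], dim=1)                  # (B, P+K)

    loss = torch.logsumexp(all_logits, dim=1) - torch.logsumexp(pos_logits, dim=1)
    return loss.mean()

if __name__ == "__main__":
    B, P, K, D = 8, 2, 4096, 256
    loss = multi_positive_info_nce(torch.randn(B, D), torch.randn(B, P, D), torch.randn(K, D))
    print(float(loss))
```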
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "nips_2022_H5z5Q--YdYd", "Hclm6svFN2WN", "xPfmENZFPc", "anXLYgLcawF", "lSUq70lL6-8", "3uxolWR4eHh", "H3SiN9K4NZ_", "pwAgPw6jSiY", "wFUi9weFn0h", "H3SiN9K4NZ_", "H3SiN9K4NZ_", "srJqej-zfHT", "pwAgPw6jSiY", "nips_2022_H5z5Q--YdYd", "nips_2022_H5z5Q--YdYd", "nips_2022_H5z5Q--YdYd" ]
nips_2022_GAUwreODU5L
GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images
As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, thus immediately usable in downstream applications. Prior works on 3D generative modeling either lack geometric details, are limited in the mesh topology they can produce, typically do not support textures, or utilize neural renderers in the synthesis process, which makes their use in common 3D software non-trivial. In this work, we introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high fidelity textures. We bridge recent success in the differentiable surface modeling, differentiable rendering as well as 2D Generative Adversarial Networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings, achieving significant improvements over previous methods.
Accept
The paper proposes a generative model for synthesizing textured 3D meshes given only a collection of 2D images. The paper has received overwhelmingly positive reviews. Many reviewers find the idea interesting, the paper well-written, the results compelling, and the experiments comprehensive. The rebuttal further addressed the concerns such as camera poses and missing comparisons. The AC agreed with the reviewers’ consensus and recommended accepting the paper.
train
[ "JTbtH5FEXg", "_W9jeqEu10", "IGbMrHNj7_", "1Yt5-YGW0_", "4y8gR9uaBQ", "E2h7HsKveZ", "jN2KkfNEedU", "mEI_HhX42Bz", "bozwr55AvRr", "wNqjHJ7XPNE", "opUHMgrgGJ_", "QCKD5SL9ZmE", "JUdeoF7_cMs", "I2X9p8gxZt", "ZzGCMQ-7anQ", "Mcwlix9NNcg" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the feedback! \n\nThat's really great point to interpolate the geometry code and texture code individually! We provide two such results in our revised main paper (see Sec 6.5, Fig 16 & 17). Please check it.\n\nFor each figure, at every row, we interpolate the geometry latent code while fixing the texture latent code, and at every column, we interpolate the texture latent code while fixing the geometry latent code. These results demonstrate that our model is not only capable of decently disentangling geometry and texture during generation, but also can generate meaningful interpolation for either geometry or texture.\n\nWe thank the reviewer for providing this suggestion! If the reviewer has any further questions or suggestions, we are more than happy to take them. Thank you!", " Dear reviewr, \n\nThank you for providing valuable comments. The authors have provided detailed responses to your comments. Has the response addressed your major concerns?\n\nI would appreciate it a lot if you could reply to the authors’ responses soon as the deadline is approaching (Tues, Aug 9). \n\nBest, \n\nACs\n", " Thank you for the detailed response and the feedback is satisfactory. Fig 6 makes more sense now.\n\nJust wonder what the results will look like if you interpolate the shape code and texture code individually. I believe this could further improve the paper (not a must-do).", " I checked new experimental results. It is good to see that the proposed method works well in a more realistic experimental setting (e.g. natural images, inaccurate supervision).\n\nThank you for telling the qualitative evaluation of the decomposition in the supplement. It seems to be a good result.\n\nThanks for explaining the difficulty of generating meshes from neural fields. I agree that it would be difficult get exact SDF with neural fields.\n\nI have read the comments of other reviewers and found no serious problems. They all seem a little concerned that each component is not technically novel.\n\nThanks for the feedback. Some of my concerns have been addressed. This is a good paper and I hope it is accepted.", " We thank the reviewer for appreciating our paper and additional experiments!\n\nRegarding the unknown camera intrinsics, we thank the reviewer for pointing out the very interesting adverse effects of approximating the focal length change through the change in camera pose. It is indeed a very hard task to learn intrinsics, extrinsics, and 3D geometry from image data since they are highly entangled. Assuming the same intrinsics is a crude approximation that might help reduce the problem (though we agree it doesn't solve it). To tackle this highly-entangled problem, it might be better to utilize some priors from 3D (e.g. human face has a particular distribution of geometry) to regularize the solution space. In such cases, one would ideally have a dataset in which the same person would be captured with different cameras (intrinsics & extrinsics), making the disentanglement of individual factors easier.", " Dear Reviewers,\n\n\nThank you again for your thorough reviews. We would like to kindly remind you that the author-reviewer discussion period ends on Aug. 9 (in 2 days). 
We would appreciate it if you could have a look at our replies and let us know if they address your questions satisfactorily or if there are any further follow-up questions.\n\n\nWe would appreciate any additional feedback and would be happy to discuss it further.\n\n\nThank you very much,\n\nThe authors", " Thank you for the updated materials and response. It is reassuring to see the results with noisy camera poses, as well as the hyperparameter specs for different categories. \n\nOn the intrinsics though, converting images with different focal lengths, even without lens distortions, to images with the same intrinsics but different 'pose' is actually only an approximation, where the depth range of the object should be limited. It's probably less perceptible to humans when the objects are cars and bikes, but for faces, for which we are very good at telling minor differences, those effects are quite visible [1].\n\n[1] http://www.danvojtech.cz/blog/2016/07/amazing-how-focal-length-affect-shape-of-the-face/\n\nThis is not an objection to the response, just a discussion on the matter of intrinsics. I don't think this is a huge issue for this paper, but it is harder than it seems when the camera intrinsics are unknown.", " We thank all the reviewers for their insightful reviews. Before addressing the specific questions in the individual replies, we provide a short description of the additional experiments that we have added at the end of the original submission (marked with blue font). Specifically, we aim to address the common concerns regarding the camera poses, 2D silhouettes, real images and the comparison with EG3D on human characters. To this end, we provide four additional experiments:\n\n1. **Noisy camera poses:**\nIn this experiment, we randomly perturb the camera poses with Gaussian noise during training on the ShapeNet cars category. Adding camera noise harms the FID, while qualitatively we don’t observe significant differences from the results of our original model. This result demonstrates the robustness of our method to a moderate level of noise in the camera poses. \n\n2. **Imperfect 2D silhouettes:**\nTo imitate how one might obtain 2D segmentation masks in real-world applications, we replace the ground truth silhouettes of the ShapeNet cars category with the ones obtained from Detectron2 using a pretrained PointRend model. We observe a drop in the FID scores, while qualitatively the results are hard to distinguish from the ones generated by our original model. This shows that GET3D is not overly sensitive to imperfect masks obtained from a pre-trained 2D segmentation network. We would also like to note that GET3D can/will benefit from future advances in 2D image segmentation, whereas it is hard for methods that require full 3D supervision to reduce their supervision cost. \n\n3. **“Real” Images:**\nSince many real images lack camera poses, we follow GANverse3D [56] and utilize a pretrained 2D StyleGAN to generate realistic images with good camera initializations and 2D silhouettes. Our method performs decently well on this dataset and generates reasonable 3D textured meshes, demonstrating the potential applicability to real-world data. \n\n4. **Comparison with EG3D on Human characters:**\nFollowing reviewer 9TSe’s suggestions, we additionally train EG3D on the human character dataset and compare it with our method. In this setting, the models are tasked with generating the entire human body including the clothes.
GET3D significantly outperforms EG3D in terms of 3D shape synthesis quality (FID-3D) and achieves comparable performance on multiview 2D image generation (FID-Ori). \n\nIt was demonstrated in the original submission that GET3D can generate 3D textured shapes of excellent quality and can be applied to several applications, including material decomposition and text-guided shape generation. These additional results should further support its position as a versatile SoTA generative model of high-quality textured 3D shapes with strong robustness to imperfect inputs.\n", " We thank the reviewer for the positive feedback and for perceiving our method as a significant step toward realistic 3D content generation models. Below, we reply to individual comments and questions raised by the reviewer:\n\n1. **Novelty:** While we acknowledge that individual submodules of GET3D might not be technically novel on their own, the novelty of our work lies in devising a proper solution to tackle a practically important research problem. Our paper is motivated by practical problems in 3D content creation and we address many limitations of previous work, by smartly combining the modules, which exploit the recent advances in differentiable surface extraction, highly efficient differentiable rasterization, and GAN-based generative models. As a result, GET3D is not only the first method that achieves such high-quality 3D assets generation, but it is, to the best of our knowledge, also the first 3D generative model that is able to directly output textured meshes with arbitrary topology. We believe that our model opens a new door for 3D generative models and can motivate many follow-up works along this line.\n\n2. **Supervision cost:** To alleviate the reviewer’s concerns, we have tried to imitate how people would collect the dataset and train GET3D in the real world. To this end, we ran three additional experiments to verify the robustness of our model (See quantitative and qualitative results in Sections 6.1 & 6.2 & 6.3 of the main paper). In particular, we add Gaussian noise to the camera parameters, use a 2D segmentation network to obtain 2D masks and train on StyleGAN generated realistic dataset following GANverse3D [56]. In these challenging settings, GET3D still performs robustly and achieves comparable qualitative results as in our main evaluation. These results also imply that GET3D is a good candidate for 3D textured shape generation from real-world images — an application that we will pursue in our future work. The supervision cost for the training on the StyleGAN dataset is significantly lower than methods that require 3D supervision. \n\n3. **Room for improvement:** We agree with the reviewer that there is still room for improvement and we wouldn’t dare to assume/imply that GET3D has solved the field of the generative modeling of 3D textured shapes. However, we do believe that our method made a significant step in this direction and that our performance is significantly better than other baselines and creates a new SOTA for generating 3D textured meshes. \n\n4. **High-resolution scene:** We agree that DMTet itself has the limitation of representing large-scale scenes. However, one potential solution to mitigate this problem is to combine DMTet with a scene graph representation [a], where we can use the scene graph to represent spatial relations between individual objects and each object can be represented with DMTet. 
This also opens several interesting opportunities for compositionality, where assets from different datasets could be combined to generate novel scenes. Developing a 3D generative model for scenes is definitely an interesting future direction for our method. \n\n5. **Quantitative results of material generation:** We provided a quantitative comparison in the Supplementary Material (Section D, Table C), where we evaluate the realism of the 2D rendered images under real-world lighting using the FID scores. Results indicate that the material generation has better capacity and improves realism compared to the texture baseline. \n\n6. **Difficulty to extract meshed textures from neural fields:** Yes, the difficulty exists when using volume rendering to render the neural fields. In particular, when volume rendering is utilized, it has ambiguities in defining the underlying texture field [b,c] and the texture will not lie exactly on the surface, this is also the reason why people often see foggy results in Nerf [b,c]. Theoretically, when SDF is used, it will not have this issue. However, practically, people often parameterized SDF using neural network, and the generated “SDF” is not guaranteed to be the exact SDF (e.g. one typical approach is utilizing Eikonal regularization on generated SDF [44], but it still can not guarantee the output is a valid SDF), therefore, many papers are still utilizing volume rendering to render the generated “SDF” and they will have the same ambiguity problem as above. On the contrary, our method does not have this difficulty since we directly render the surface points (L152-L156) and the texture is defined exactly on the object’s surface, despite the fact that we generate a texture field. \n\t\nIf there are any further concerns, we would be happy to address them in further discussion. Thank you!\n\n[a] Neural Scene Graphs for Dynamic Scenes\n\n[b] NeRF++: Analyzing and Improving Neural Radiance Fields\n\n[c] Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction.\n", " We would like to thank the reviewer for the positive and detailed feedback and for perceiving our results as convincing and sufficient. We are also happy that the paper came across as well-written and easy to follow. \n\nBelow, we reply to individual comments and questions raised by the reviewer:\n\n1. **UV Map:**\nThis is a great point and we should have made it more clear that, with *texture*, we don’t mean colored vertices, but rather an actual 2D texture map. As briefly described in footnote 1 (Page 7), the UV map can easily be obtained from our representation in the following manner: i) we first use the xatlas [54] package to obtain the texture coordinates of the extracted mesh from the DMTet layer, ii) using these texture coordinates, we can unwarp the 3D mesh onto a plane and obtain the corresponding 3D location on the mesh surface for any position on the 2D plane, we then iii) discretize the 2D plane into an image, and for each pixel, we query the texture field using the corresponding 3D location to obtain the RGB color. The obtained image represents the UV texture map. Note that this process only happens during inference when we export the generated textured mesh, and not during training. \t\n\n\n2. **Comparison to [46]:**\nWe mainly focused on comparing methods that are capable of generating shapes with varying topologies, as we see this as a crucial component for high-quality 3D asset generation. 
While [46] can indeed generate textured mesh, its mesh is obtained by deforming a sphere, which means that the topology is fixed to genus 0 and it is thus hard for [46] to represent complex objects (with higher genus surfaces), like motorbikes. Qualitatively speaking, the results in our paper are significantly better than the results reported in [46]. \n\n3. **Illumination model:** We provide details of our illumination model in Supplementary material (Section D). Specifically, we adopt the Disney BRDF and represent the lighting with 32 Spherical Gaussian lobes. Our illumination model can hence represent more complex lighting effects than the Phong model. \n\n4. **Volume subdivision:** We found subdivision only improves our results on categories with thin structures (e.g. chair, motorbike) and hurt performance slightly for other categories, in contrast to [51]. We hypothesize that this is related to the loss function and supervision signal. [51] use full 3D supervision, while we only have supervision on rendered masks/images. One possibility is that the rasterizer could be affected by having too many sliver triangles after subdivision, leading to numerical instability in gradient computation. We will tone down our claim and plan to investigate this problem in future work.\n\n5. **Interpolation:** we apologize for the confusion. In Fig. 6, we interpolate the latent code for both texture and geometry. Therefore, both of them are changed during interpolation. We have fixed it in our revised version. \n\nIf there are any further concerns, we would be happy to address them in further discussion. Thank you!\n", " We would like to thank the reviewer for the positive feedback and for appreciating that our generated shapes can be directly used in general 3D render engines, and the supervision in 2D images is more wildly available than 3D.\n\nBelow, we reply to individual questions and comments raised by the reviewer:\n\n**1. Supervision & Applications:**\n\nFirst, we hope to clarify that our model only assumes known lighting **distribution** for the training dataset and **NOT** the GT lighting of each image (See Supplementary Material Section D). For the camera pose of each image, we demonstrate that our model works decently without knowing the exact camera poses of each image in Section C.3.2 of the Supplementary Material. Furthermore, as mentioned in our general reply to all reviewers, we conducted an additional experiment in which we used noisy camera poses during training and achieved qualitatively almost indistinguishable results (Section 6.1). We also conduct an additional experiment on training GET3D on StyleGAN generated realistic dataset following GANverse3D [55] and achieve decent 3D generation with rough cameras and imperfect silhouettes. These experiments demonstrate the robustness of our method and we believe that our model makes an important step towards training 3D generative models purely from real-world images. \n\n\nWe also respectfully disagree with the reviewer on the conclusion that our application scenario is limited. In our work, we have already shown two applications of our generative model: i) learning surface materials, which enables us to relight the generated 3D assets, and ii) text-driven 3D shape generation. The network architecture of GET3D also lends itself nicely to other applications such as: single image 3D reconstruction, shape completion, shape editing, etc. which we plan to revisit in our future work.\n\n\n**2. 
Local details on human character generation:**\n\nWe wish to point out that it is not fair to us to only focus on faces and compare our human character generation results to results reported in EG3D [8]. In our experiment (Fig 1 & 5), our model is trained to generate the **whole** human body including clothes, shoes, legs, hands, etc, while EG3D **only** generates human faces without other parts of the body. We hope that the reviewer agrees that the first is a much more challenging setting. Nevertheless, we have added an experiment in which we tried to make the comparison fair, by training EG3D on the same dataset as GET3D and tasking it to generate the whole human body (see Section 6.3). Our model performs significantly better than EG3D in terms of generating more accurate and diverse 3D shapes with textures. Similarly, we observe that GET3D consistently generates higher quality and more diverse 3D shapes than EG3D [8] on other datasets/categories (see Table 2 and Fig. 3). \n\n\n\n**3. Typos:**\n\nThanks for pointing out typos, we have now fixed them in our revised version. \n\nIf our reply successfully addresses the concerns of the reviewer, we would like to kindly ask the reviewer to consider raising their score accordingly. If there are any further concerns, we would be happy to address them in further discussion. Thank you!\n", " We would like to thank the reviewer for the detailed feedback and for appreciating the quality of our generated shapes. We are glad that the reviewer finds our paper to be high-quality, well-written, and easy to follow. \n\nIn the following, we reply to individual questions and comments raised by the reviewer:\n\n\n**1. Robustness to distribution shifts of camera poses:** \n\nOur method does not require an exact camera pose for each training image. The supplementary material includes an ablation study (Section C.3.2) that compares using exact camera poses for each image in the training data with sampling from the distribution of camera poses. There is almost no difference in the quantitative results, indicating that the exact camera pose is not necessary. In fact, the main reason to condition our discriminator on the camera pose is that it simplifies the computation of the 3D evaluation metrics on ShapeNet where all the shapes are represented in their canonical pose. \n\n To further demonstrate the robustness of training with noisy camera poses, we conduct an additional experiment by randomly perturbing the camera with Gaussian noise as described in the general response to all reviewers. Quantitative and qualitative results of this experiment are provided in the revised version (see Section 6.1 in the main paper). The qualitative results are close to the model trained with ground truth camera poses, demonstrating the robustness to a moderate level of distribution shift of camera poses.\n\n\n**2. Different intrinsics:** \n\nThis is a very good point and we agree with the reviewer that the real-world images might be captured with different intrinsics. However, one approach that can help mitigate this problem is to learn a “virtual camera” by assuming all the images are captured using a camera with the same intrinsics but vary in camera poses. This idea has been explored in [a] and EG3D [8], and could easily be plugged into our method.\n\n**3. Training stability:** \n\nThanks to the R1 regularization [35] applied to the discriminator (Eq. 1) the GAN in GET3D is quite robust in our experiments. 
In our work, we have experimented with 6 different categories and they all work well without any changes to model architectures. In fact, the only hyperparameter we changed is $\\lambda$ for R1 regularization (Car: 40, Animal: 40, Motorbike: 80, Human 80, House: 200, Chair: 3200). To enable fair comparisons and easier reproduction of our results, we will also release the source code if the paper is accepted. \n\n\n[a] Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild\n\nGarrick Brazil, Julian Straub, Nikhila Ravi, Justin Johnson, Georgia Gkioxari", " This paper proposes a 3D textured mesh generator trained using 2D images, known camera distributions and object silhouettes. The generator is composed of a geometry generator, which generates the SDF value and vertice deformations to a tetrahedron field. The mesh is then extracted using differentiable marching tets. The texture generator generates a texture field, which predicts the reflectance of a point in space using its coordinates and the latent codes. By differentialbly rendering them into images and sillhouettes, image-based discriminator is used to provide the signal for training. \nIn addition, the authors also presented two variants, first is generating the material maps instead of reflectance, where the generated objects can be relit. Second, using a clip discriminator, the generator can be tailored to perform the task of text-to-textured-mesh. Strengths:\nOriginality:\n\nEven though the core compnent of the system is based on prior works(nvdffrast, tet marching), the system itself is novel and nontrivial to setup. As far as I know, the proposed system is the first of its kind to generate high-quality textured mesh. \n\nSignificance:\n\nThe proposed system tackels multiple drawbacks of previous results; it uses mesh as the final geometric representation, but also not constrained by topology nor utilizes a template mesh. The quality of the result is also appealing, making it a promising candidate for generating assets for graphics applications. \n\nQuality:\n\nThis paper is of high quality. In addition to technical details, this paper presents multiple experiments with two variants of the method to support the validity of their design choices and the potential of the method. \n\nClarity:\n\nThis paper is well written and easy to follow. In general I find this paper above the acceptance bar. The following questions are intended to elicit more insights and details of the proposed methods:\n\n1. How robust is the proposed method to distribution shifts of camera poses? Since it samples from the distribution of camera poses, does it really matter to have access to the exact camera pose, which seems to be a limitation of the method to apply to real images?\n\n2. In addition to camera pose, there's also the matter of camera intrinsics. For real images, those are usually unknow and varying as well. Will the method robust enough to handle that?\n\n3. How delicate is the system to train successfully? Though GANs are hard to tune overall, it would be nice to have some insights on how easy it is for others to re-train the system on different categories. The authors provided adequate discussions on the limiation of the method.", " This paper proposes a generative method to produce textured 3D mesh that can be directly used in general 3D rendering engines. The overall pipeline contains a geometry generator, a texture generator, a differentiable rendering module, and two discriminators. 
The geometry generator exploits DMTet as surface representation, and a tri-plane texture field is used for the texture generator. Besides, the differentiable module is used to predict the 2D silhouette and 3D on-surface coordinates that are used to query the corresponding RGB color in the texture field. The two discriminators are inspired by StyleGAN for silhouette and rendered RGB images to achieve RGB-supervised training, respectively. In general, this paper is a combination of DMTet, texture field, NVdiffrec, and StyleGAN. While the performance excell SOTA method, the novelty is limited. The strength of this work is the generated texture mesh can be directly used in general 3D rendering engines, which is not suitable for previous works. Meanwhile, the supervision is 2D images that are more widely available than 3D geometry data. \nThe weakness of this work contains:\n1. Although this work only requires 2D images as supervision, all experiments are trained with rendered RGBs with known camera parameters and light variation. The application for in-the-wild RGB images needs more discussion. Hence the application scenario of this method is limited. \n2. The author claims their method can generate high-quality 3D textures meshes, but local details (e.g. face) of the human character are missed (as in Fig.1, Fig.5). A qualitative comparison of local detail with previous SOTA methods (e.g. EG3D) will make this claim more valid. \n3. The lack of quantitative comparison of human character. \n\nTypo: \nP7, L222, \"Tbl. 2\". \nSupp. P3, L101, \"Fret\"\n Please provide some discussion about the weakness Yes ", " This paper proposed GET3D, a generative model that synthesizes high-quality 3D meshes with textures.\nThe authors put the recent 3D shape reconstruction work [40] into a styleGAN architecture and demonstrated that this framework is capable to generate diverse 3D meshes with various textures. The experiments on multiple datasets show the proposed method is superior to the prior arts. The writing is clear and the idea is easy to follow.\n\nThe authors closely followed the most recent 3D mesh reconstruction work nvdiffrec [40] and developed further to incorporate the 3D reconstruction pipeline into a GAN. I am glad to see that without many bells and whistles, the proposed GET3D works pretty well on multiple datasets.\n\nThe experiments are sufficient overall, though I do have some minor comments.\n\nWeakness:\n1. It is great that the generated 3D textured meshes can be directly used in the production tools like Blender. I wonder how the texture is represented in the final output as I don't think these tools can utilize texture fields? Is it easy to get the UV map from the proposed model? Or does \"texture\" only mean \"colored vertices\"?\n2. Based on Table 1, [46] actually is closer to what the authors proposed in this paper (without the texture part though). Why [46] is excluded from the comparison?\n3. What is the illumination model in the differentiable rendering part? If I understand correctly, it is a Phone model with only 1.0 ambient lighting.\n4. Though in L255, the authors claimed that \"hence the subdivision cannot provide further improvements\", in Table 2, subdiv actually hurts the performance seriously. Any idea?\n5. Shape interpolation is impressive. It would be great to see the texture interpolation as well (though the title of Fig 6 is shape interpolation, the texture has also changed...why...). I do not have major concerns. 
Just some minor questions and comments as listed in the weakness section.\n\nOverall I think this is a high-quality paper. The idea is straightforward and the results are convincing. Though I have some minor concerns, I lean into accepting this paper. The authors listed two major limitations in their work: 1. the evaluation was done on the synthetic data only and 2. per category training. The authors also discussed the potential social impact of their work applying on privacy or biased data. All these sound sufficient to me.\n", " - A generative model of 3D models for 3D content creation should output a mesh of detailed geometry and any topology and must be trained using 2D images without 3D supervision. The proposed method is the first that has these properties. The proposed realizes it by combining a 3D representation of meshes with any topology (DMTet) [51] (S.3.1.1), texture fields [43] (S.3.1.2), differentiable rendering [31], and StyleGAN [29] (S.3.2).\n- The experiments conducted on ShapeNet, Turbosquid, and Renderpeople demonstrate that the variety and quality of generated 3D models by the proposed method are better than existing 3D mesh generation models and 3D-aware image generation models. Also, the proposed method is applied to (1) unsupervised decomposition of geometry, light, and material, and (2) text-guided 3D model generation. #### Strength\n\n- The proposed method can handle topology changes as it uses DMTet as a 3D representation, different from The existing generative model of meshes [46]. It is a significant step toward realistic 3D content generation models.\n- Comparison with existing 3D generation methods [46,11] are conducted on multiple datasets, and quantitative improvement is certain in the geometry diversity and visual quality metrics. Also, it is a bit surprising that the proposed method generates better images than recent 3D-aware image generation models inspired by NeRF.\n- Qualitative results are impressive and clearly outperform the baselines. Improvement of texture is explained by the image resolution in training (Table 3), which is also an interesting finding.\n- Though no quantitative comparison is provided, the quality of text-guided 3D model generation is significantly better than Text2Mesh [37].\n\n#### Weakness\n\n- The proposed method is a straightforward combination of existing methods [51,43,31,29], and each component is technically not novel, although what is realized by the combination is novel.\n- Training the proposed method requires the distribution of camera parameters and silhouette images, so it practically requires a synthetic dataset of 3D models, as noted in l.300. Therefore, the cost of training datasets is still not very lower than existing methods that require 3D supervision.\n- The generated shape and texture look a bit coarse when zoomed up, though they are finer than [46,11].\n- DMTet is not very good at representing high-resolution scenes as it stores information in 3D grids. Therefore, it is difficult by the proposed method to generate a large-scale scene containing multiple objects, such as bedrooms, unlike 2D GANs.\n- The result of unsupervised geometry, lighting, and material decomposition look appealing, but I am not very sure the proposed method is actually good because quantitative comparison is not provided.\n\nAlthough the technical contribution is not very strong, I recommend acceptance because of the impressive results shown in the experiment. 
It is written that it is difficult to extract meshed textures from neural fields (l.45, l.85), but what exactly is the difficulty? That is understandable when volume rendering is used, but when SDF is used, I guess the difficulty is the same as the proposed method using texture fields. It is described that it is difficult to apply the proposed method to natural datasets because the distribution of camera poses and silhouettes images are assumed to be known. No other serious limitations were found." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "IGbMrHNj7_", "opUHMgrgGJ_", "wNqjHJ7XPNE", "bozwr55AvRr", "jN2KkfNEedU", "nips_2022_GAUwreODU5L", "QCKD5SL9ZmE", "nips_2022_GAUwreODU5L", "Mcwlix9NNcg", "ZzGCMQ-7anQ", "I2X9p8gxZt", "JUdeoF7_cMs", "nips_2022_GAUwreODU5L", "nips_2022_GAUwreODU5L", "nips_2022_GAUwreODU5L", "nips_2022_GAUwreODU5L" ]
nips_2022_3vYkhJIty7E
Learning Optical Flow from Continuous Spike Streams
Spike camera is an emerging bio-inspired vision sensor with ultra-high temporal resolution. It records scenes by accumulating photons and outputting continuous binary spike streams. Optical flow is a key task for spike cameras and their applications. A previous attempt has been made at spike-based optical flow; however, that work only considers motion between two moments and is trained on graphics-based data, which limits its generalization. In this paper, we propose a tailored network, Spike2Flow, that extracts information from binary spikes with a temporal-spatial representation based on the differential of spike firing time and spatial information aggregation. The network utilizes continuous motion clues through joint correlation decoding. In addition, a new dataset with real-world scenes is proposed for better generalization. Experimental results show that our approach achieves state-of-the-art performance on existing synthetic datasets and real data captured by spike cameras. The source code and dataset are available at \url{https://github.com/ruizhao26/Spike2Flow}.
Accept
This work is focused on the estimation of optical flow from a neuromorphic camera that produces Poisson spiking at each pixel with a rate governed by overall intensity. The authors use local space-time aggregation of spike-time differentials to identify features that are then put into correspondence via a convGRU decoder. The reviewers found the application interesting and noted the good performance of the method. There were, however, a number of concerns about the innovation and novelty of the method: aggregating spikes in order to operate on point-process data is a standard approach, and the spiking source of the data was not analyzed. Regardless of the similarity to past methods, overall the reviewers felt that the strengths of the paper, specifically the combination of methods brought together to solve a unique problem, outweighed the weaknesses. Thus I recommend that this work be accepted.
train
[ "rVNfHZ5rMRq", "SeOeHhGlL91", "2soTGHJ5-kq", "Z5OKEFLB0q", "twsT-ipE7mC", "noYzkM0WIHT", "lphZOpi2_Y", "MDafw_TfeLg", "Y9F1gWduQ0U", "vUvsI0ugtPV", "zRIyKakHpu5", "ZQEgAEmheSA", "bmdsOSNV9DW", "2WzeIT3cQxg", "qh_fMBqlHK-", "flzzpgQU92Z", "Zj-1hFl_Usf", "0bD6A6FEqk7", "_C7XeuntLTp", "DE16BQcVrQo", "GoxQM9UD0Tz", "HDlrCXrYgb6" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your precious time and valuable comments. Please let us know if you have any further questions, and we will be glad to discuss with you to provide further details about our work if you like.", " Thank you for your precious time and valuable comments. Please let us know if you have any further questions, and we will be glad to discuss with you to provide further details about our work if you like.", " Thank you for your precious time and valuable comments. Please let us know if you have any further questions, and we will be glad to discuss with you to provide further details about our work if you like.", " Thank you for your precious time and valuable comments. Please let us know if you have any further questions, and we will be glad to discuss with you to provide further details about our work if you like.", " We sincerely thank you for your raising rating and further discussion. Our answers to your additional questions are as follows.\n\n**1) - (a) Advantages of spiking camera.** \nSpiking camera has other advantages, such as high dynamic range, low latency, and high memory efficiency, which has been introduced in line 18 - 19 of our paper. \n\nFirstly, the dynamic range depends on the maximum and minimum signal that the device can record. In traditional camera, the maximum signal depends on the capacity of the photon well, and the minimum signal depends on thermal noise and reading noise. For the same device in traditional camera, the dynamic range is proportional to the capacity of the photon well. In spiking camera, the accumulation in the photon well will reset to zero conditionally. In a single reading cycle, the \"capacity\" of photon well of spiking camera is limited, but it increases with the recording time, i.e., the number of spike frames. Thus, the spiking camera has a **high dynamic range**, and there has been a method to use spiking camera to guide high-dynamic imaging for traditional camera [a].\n\nSecondly, in traditional cameras, all the pixels have the same exposure time, which is usually set far below the limit of the CMOS device to expose the dark areas and restrain the noise. In spiking camera, there is not a uniform exposure time, and we can set a **much higher frame rate** that is more close to the limit of the CMOS compared with traditional camera. The optical scenes are converted to spike streams at high speed. Thus, the spiking camera has **low latency**.\n\nThirdly, compared with traditional camera that has the same frame rate, spiking camera needs **much less memory and bandwidth** since the spike streams are sparse and binary.\n\nGiven the above-mentioned advantages, the spiking camera has broad application scenarios, especially in high-speed scenes. In the applications of spiking camera, estimating optical flow is crucial since it offers clues to the motion in the scene.\n\n[a] J. Han, et. al. Neuromorphic Camera Guided High Dynamic Range Imaging. CVPR 2020.\n\n**1) - (b) Real Dataset.** \nThere are experiments on real data that have been introduced in our answer to Weakness 2 of Reviewer zkLu ([Link](https://openreview.net/forum?id=3vYkhJIty7E&noteId=bmdsOSNV9DW)). As for the generalization of simulated spikes, we have introduced in line 164 - 168 of our paper that our method generalizes well on PHM and real data. 
Thanks for your suggestion, and we have included more descriptions of the domain gap with real spikes in our revised paper.", " **2) Topics of NeurIPS.** \nWe admit that camera and imaging related research works (like this one) may not be the mainstream in the scope of NeurIPS. However, we noticed that there are still some NeurIPS papers on neuromorphic cameras and other novel cameras (including event cameras, spiking cameras and polarization cameras) and related intelligent algorithms in recent years. For example, The work [a] implements high dynamic range imaging with data from **modulo camera**, and they also train the networks with synthetic data. It is notable that the work [a] implements modulo camera based on **spiking camera** that is studied in our paper to get new real data. The work [b] dehazes images with **polarization camera**, and they also train networks with synthetic data. The work [c] focuses on optical flow estimation for **event camera**. The work [d] studies object detection for high-resolution **event camera**. Thus, we think out paper is still relevant to the scope of NeurIPS.\n\n[a] C. Zhou, et. al. UnModNet: Learning to Unwrap a Modulo Image for High Dynamic Range Imaging. NeurIPS 2020. \n[b] C. Zhou, et. al. Learning to dehaze with polarization. NeurIPS 2021. \n[c] J. Hagenaars, et. al. Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks. NeurIPS 2021. \n[d] E. Perot, et. al. Learning to Detect Objects with a 1 Megapixel Event Camera. NeurIPS 2020. \n\n**3) The design of the model.** \nWe design the model based on the challenges and key advantages of spiking camera when estimating optical flow. The contributions are proposed according to the characteristics of the spiking camera. \n(1) DSFT estimates the firing rate of spikes to represent the light intensity. The firing rate fluctuates since the photon arrivals follow a Poisson process, and the motion of objects makes them projected on different pixels, making the firing rate change dramatically. We use continuous DSFT rather than the statistical term of firing rate to recover the dynamic process better. \n(2) As described in (1), the firing of spikes exhibits remarkable randomness since the photon arrivals follow the Poisson process. We propose SIA module to reduce the influence of the randomness of spikes in all feature extraction of spikes. After feature extraction, we consider the motion as a continuous dynamic process and construct a series of correlation volumes. The local correlations are encoded to a single hidden state for estimating optical flow of each moment with the reference of correlations of other moments. This JCD module further utilizes the continuousness of the spike streams in a longer time range to restrain the error in the flow estimation and reduce the influence of spikes' randomness.", " I raised my rating to borderline reject.\n\nHowever, I still feel a few limitations exist: 1) the problem setting is not fully motivated. What's the advantages of spiking camera? Only higher frame rate? The authors even don't have a real dataset. They merely converted Slow Flow (RGB videos) to spiking videos for evaluation (not to mention the converted videos may have a domain gap with real spiking videos). 2) Moreover, it seems that since the method is an adaptation on the new modality of spiking cameras, maybe it more suits an application-oriented conference like CVPR, instead of NeurIPS, which considers broader impact. 
3) I still believe that the technical novelty (i.e., on the optical flow estimation model design) is rather limited. So I'll keep the rating at the negative side.\n", " the authors have answered my questions. I do not see a reason to change the paper's ranking", " Thank you for your response and insightful suggestions for our paper!\n\nIn the context of our paper, firing rate refers to the frequency of spike firing in a pixel. In other words, it indicates the **average** number of spikes fired within a certain period of time, i.e., a time window. This term is also connected with the **average** spike interval within that period. This is a **statistical concept**. In the ideal case, if the events of photon arrivals occur with a stable rate, using the \"firing rate\" and using the \"spike interval\" can be equivalent.\n\nHowever, in reality, the incoming photons follow a Poisson process. Therefore, the process of photon arrivals and the firing of spikes exhibit remarkable **randomness** so that the interval between each individual spike may fluctuate considerably. The existence of thermal noise in the circuits of the sensor may make the fluctuations even worse. Furthermore, we need to further consider the effect of **motion** on the frequency of spike firing. When the objects move rapidly in the scene, the image projected onto the sensor also moves quickly so that the light intensity of a certain pixel may change dramatically. In this way, inferring a \"fire rate\" from a time window does not make sense because we may combine the spikes triggered by different light intensities and blur the recorded dynamic light intensity changing process.\n\nIn our scheme, we use DSFT to extract the interval of each spike. We choose to keep all these **individual** spike intervals instead of calculating the **statistical** term of firing rate. In this way, we keep more information and detail so that the following network modules are able to recover the dynamic process better.\n\nWe have revised the corresponding descriptions in line 197 - 201 of our paper.", " Thanks for your clarifications and apologies for missing the results on real spiking camera data.\n\nRe 1c: Firing rate does not have to be a neural population characteristic. Firing rate is also a property of a single neuron, or pixel in your camera. If I understand the camera model well, the firing rate is directly related to the brightness. Your DSFT looks at the time between two subsequent spikes to estimate the firing rate of the single pixel. One could also measure the rate over a time window. Longer time windows would give more accurate results for a constant firing rate, but would lag more. My question was if this could be clarified in the text for future readers.\n\n\n", " Thank you for your comprehensive summary of our paper and its strengths. My answers to your questions are as follows.\n\n> 1. The introduction of the joint correlation decoding part.\n\nThere are two key points for the network design. One point is to extract efficient features from spikes, and the other point is to use the continuousness of the spike streams. The joint correlation decoding module is designed based on the consideration of the two points. The Spike2Flow network firstly constructs a series of correlation volumes corresponding to the moving procedure, and local correlations are looked up according to the currently estimated flow in each iteration. The JCD encodes the local correlations into a single hidden state for decoding the continuous flow fields. 
Jointly estimating the motion can constrain the solving for optical flow and reduce the fluctuations of spikes since similar data can counteract the randomness. More details of the JCD module can be found in Section B.1 and Fig. 12 of the pdf file in the supplementary material.\n\n\n\n>2. Why the decay factor is 0.8?\n\nThe iterative optimization architecture is common in current optical flow methods. When constructing the loss function, we give the highest weight to the output of the last iteration since it has passed through the most optimization and it's most reliable. 0.8 is an empirical value inherited from RAFT [a].\n\n[a] Z. Teed and J. Deng. RAFT: Recurrent All-Pairs Field Transforms for Optical Flow. ECCV 2020.", " Thank you for your summary, and we appreciate you for pointing out the strengths of our paper. My clarification and answers for the weaknesses and questions you summarized are as follows.\n\n**Answers to the Weaknesses**\n\n> 1. The encoding scheme seems quite straightforward. A spiking camera accumulates illuminance, which means that a bright pixel will have a higher firing rate. The proposed DSFT basically makes a local estimate of this rate, encoding the time between two subsequent spikes. This seems quite a naive method that may be sensitive to noise. Nothing is mentioned about this.\n\nThank you for your comments. The DSFT indeed makes a local estimate of the spike firing rate through the spike interval. It's natural and acceptable that the spikes are non-uniform and the DSFT transformations of spikes are fluctuating. The fluctuations and randomness are intrinsic characteristics of spiking cameras' output data since **the arrival of the photons follows the Poisson process**. We aim to represent the light intensity better when designing DSFT. The \"1\" in spike streams reflects the result of the photon integral but not the photon's arrival rate. Different \"1\" correspond to various light intensities, and using binary spikes to extract features for matching is inappropriate. We use DSFT associated with the arrival rate of the photons to represent the light intensities at each pixel better.\n\nAs for handling randomness and fluctuations, we design Spatial Information Aggregation (SIA) and Joint Correlation Decoding (JCD) modules. The encoder firstly extracts deep features from the DSFT of spikes. The SIA module aggregates long-range correspondence in each feature to reduce the influence caused by the randomness of spikes with non-local similarities. The JCD module jointly estimates a series of continuous motions by encoding the corresponding correlations into a single hidden state. The mutual utilization of multiple can counteract the randomness.", " **Answers to the Weaknesses**\n\n> 2. As far as I understand, the network is not applied to actual spiking camera inputs, but to a simulation of such a camera based on a normal (or high-speed) camera stream. The paper would be much strengthened by using actual spiking camera inputs.\n\nThank you for your concern about the real (actual) data captured from actual spiking cameras. **We had included the results** based on real data captured by actual spiking cameras **in the main paper, supplementary material (pdf and video)** when first submitting the paper. 
The table below shows 7 scenes of actual spiking camera inputs.\n\n| Index | Name | Description |\n| :---: | :---------- | :-------------------------------- |\n| (1) | Fox - A | A fox doll is shaked |\n| (2) | Poker | A poker card is falling |\n| (3) | Spining Top | Several spining tops are rotating |\n| (4) | Doll | A Peking Operal doll is falling |\n| (5) | Fox - B | A fox doll is shaked |\n| (6) | Fan | A fan is rotating at a high speed |\n| (7) | Leg | A leg is shaking |\n\nExperiments of results on the actually captured data appear in the following locations.\n\n- Fig. 6 in the main body of the paper shows comparable results on real data. The \"scene\" column is the temporal average of the spike since we think it's inappropriate to show a single frame of raw binary spikes. There are (1), (2), (3) scenes in the above table.\n- Fig. 15 in the pdf in supplementary material shows comparable results on real data. The meaning of the \"scene\" column is the same as that in Fig. 6. There are (4), (5), (6), (7) scenes in the above table.\n- The 0:10 - 0:26 of the video in supplementary material shows dynamic results on real data. Here we show the raw spike on the top-left corner of the video since the video is temporally dynamic. There are (1), (2), (7) scenes in the above table.\n\n\n\n> 3. RAFT and other sota methods are only applied on the modified image inputs (AvgImg, Spike). The performance on normal images from the slowflow data set are not mentioned.\n\nThank you for your helpful comment. Your comment reminds me that my statement may mislead the readers of the paper. We have revised the descriptions of the comparable methods in the paper.\n\nThe comparison in the paper is to show which method can achieve the best results in the optical flow estimation for spiking camera. There is an important **premise** that the **input** of all the methods is **spike streams**. The RAFT [a] method is designed for RGB images, while in this paper, the \"RAFT - Spike\" and \"RAFT - AvgImg\" are not RAFT but straightforwardly designed methods **for estimating optical flow from spike streams** based on key components of RAFT. So does \"GMA/SCV - Spike/AvgImg\". Our experiment is to show which method trained on RSSF for estimating optical flow **from spike streams** achieves the best performance. Only Spike2Flow, SCFlow and the methods designed straightforwardly are appropriate.\n\nWe realize that the original \"Method\" and \"Input\" columns in Tab. 1 and Tab.2 may mislead the readers, and we have removed the \"Input\" column in the paper. Besides, The \"AvgImg\" is a pre-processing method for spikes to transform the spikes into a gray-scale image, and the \"RAFT - AvgImg\" is also a method with spikes input.", " **Answers to the Questions**\n\n> 1. (a) Are you the first to apply the DSFT / (b) what other encoding methods have been proposed? (c) Can you make the relation with firing rate clearer? (d) Can you show how sensitive this encoding is to noise?\n\nThank you for your insightful questions. As shown above, we divide the original question into four sub-question (a) - (d).\n\n(a) Are you the first to apply the DSFT? \nUsing the interval of spikes to reflect the light intensity of the scene and the accumulation procedure of the spike is straightforward. There are clues in the previous paper on optimization methods to infer the light intensity through the interval of the spike [a]. 
To the best of our knowledge, we are the first to use the interval of spikes to extract high-dimensional features in deep learning.\n\n(b) What other encoding methods have been proposed? \nThere are different encoding methods for spiking camera. The method in [b] (Section IV - A) accumulates the number of spikes of different time lengths to represent different temporal resolutions. The aggregated spike numbers are concatenated in channel dimension and then processed by 1D convolution. The method in [c] processes spike sub-streams in different time lengths using convolution layers and fusing the features using spatial-adaptive attention.\n\n(c) Can you make the relation with firing rate clearer? \nThe firing rate is a population parameter of the spikes, while the DSFT is a sample statistic. We use the sample statistic DSFT to estimate and represent the population parameter firing rate, where the statistical DSFT and the estimated firing rate have a reciprocal relation. The firing rate is for a population of spikes, while the DSFT is for a single spike, which is more fine-grained. Besides, we don't estimate the firing rate explicitly in the method. The information on the scene and the accumulation procedure of spikes are contained in the DSFT.\n\n(d) Can you show how sensitive this encoding is to noise? \nWe think it's more appropriate to discuss the sensitivity for the whole method than an encoding scheme. Although DSFT can better represent the information in optical flow estimation for spiking camera, there are still fluctuations and randomness in spikes processed by DSFT. The handling schemes for fluctuations are in the following components of the network, which have been introduced in the second paragraph of my answer to weakness 1.\n\n[a] J. Zhao, et al. Reconstructing Clear Image for High-Speed Motion Scene With a Retina-Inspired Spike Camera. TCI 2022. \n[b] X. Xiang, et al. Learning Super-Resolution Reconstruction for High Temporal Resolution Spike Stream. TCSVT 2022. \n[c] J. Zhao, et al. Spk2ImgNet: Learning To Reconstruct Dynamic Scene From Continuous Spike Stream. CVPR 2021. \n\n\n\n> 2. Can you apply your trained network to spiking camera inputs? Does it generalize well to non-RSSF, non-\"simulated\" data?\n\nThank you for your concern. We had applied our trained network to real data when first submitting the paper, and it generalizes well. The detailed locations of the results on real (actual) data have been introduced in my answer to weakness 2.\n\n\n\n> 3. The performance of Spike2Flow is always best. Can the performance of RAFT etc. be mentioned on normal RGB images for the dataset? This puts the performances more in context. For example, does RAFT lose a lot of performance when applying it to AvgImg?\n\nThank you for your comment. We have realized that the original description of the comparable results in the paper may mislead the readers. We have revised the paper to eliminate the misleading. More explanations are in my answer to weakness 3.\n\n\n\n> 4. Is j-V in equation 7 correct?\n\nThanks for your careful concern. It's a typo, and it should be $V-j$. The output of the last iteration has the highest weight in loss when training. We have revised this typo in the paper.\n\n\n\n> 5. The dataset RSSF is mentioned as a contribution, but if I understand correctly, it is an automatic adaptation of SlowFlow, which was made by others. Is this correct? 
Can it be clarified earlier in the text?\n\nWe select the raw data of SlowFlow dataset as the starting point to simulate the RSSF dataset. There are only image sequences in the raw data of SlowFlow, while we generate flow fields and spikes in RSSF. We indeed use the scene of SlowFlow to generate RSSF, but it's **not an automatic adaptation** to generate the flow fields and spikes. Simulating the spikes according to the scene is non-trivial. We have clarified our contribution on dataset more detailedly in the introduction of our revised paper.", " Thank you for your summary of our contributions and strengths. Your comments are helpful to our paper, and my answers are as follows.\n\n> 1. the algorithm does not really seam terribly innovative. This made me a bit ambivalent in deciding the paper's final ranking, but given the good results, I think it should be of interest to the community that deals with spike and event cameras\n\nThank you for your comments and affirmation that the paper should be of interest to the community of spike and event camera. Our innovations are mainly focused on the challenges of the spiking camera. The key points in this paper to estimate optical flow from spike streams are:\n\n- Extracting stable and efficient features from binary spikes\n- Using the continuousness of the spike streams\n\nThe method for feature extraction is in Section 3.4 of our paper. Firstly, light intensity is crucial in optical flow estimation. However, different \"1\" correspond to various light intensities in spike streams since the \"1\" reflects the accumulation of the photons at a pixel rather than the photons' arrival rate, i.e., the light intensity. Using features directly extracted from binary spikes for matching in correlation volumes may be inappropriate. We design DSFT to transform the spikes into the interval domain to represent the light intensities at each pixel better. Secondly, the light intensities reflected by the spikes are still fluctuating since the arrival of the photons follows the Poisson process. We design the SIA module to aggregate long-range relationships for features of the spikes to reduce the randomness and non-uniformness of the spikes with non-local similarities.\n\nThe method for using continuousness is in Section 3.5 of our paper. We simultaneously construct a series of correlation volumes for a motion procedure. The local correlations looked up according to current flows are encoded to a single hidden state for decoding the motion procedure. The detailed illustration is shown in Fig. 12 of the pdf file in the supplementary material. In this way, we can use the continuousness of the motion to constrain each flow field using all the other motion clues. Besides, jointly using a series of correlations can also reduce the influence of the randomness in the spikes.\n\n\n\n> 2. Eq 1: clarify it there is at most 1 spike per time intervale T=[T_pre, t]. Do all the spikes emerge from all pixels at the same time of is this asynchronous\n\nThank you for your insightful comments. In the current spiking camera model, there is indeed at most 1 spike per time interval, and this is a detail about the working mechanism of the spiking camera. There is a firing threshold $\\theta$ in Eq. 1 to control the firing of spikes. Actually, the $\\theta$ is adjustable in spiking camera, and we adjust the $\\theta$ to ensure there is no more than 1 spike is fired during a reading interval. 
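To make this concrete, a minimal discrete-time sketch of the integrate-and-fire pixel behaviour described above (illustrative only; a simple Poisson photon model, arbitrary units, and placeholder names are assumed, and this is not the camera's actual circuit implementation):

```python
# Discrete-time sketch at the readout resolution: photons accumulate, a spike
# fires when the threshold theta is reached, the integrator resets to zero,
# and at most one spike is read out per interval.
import numpy as np

def simulate_spike_stream(intensity, T, theta=4.0, seed=0):
    """intensity: (H, W) mean photon count per readout interval (illustrative units)."""
    rng = np.random.default_rng(seed)
    accumulator = np.zeros_like(intensity, dtype=np.float64)
    spikes = np.zeros(intensity.shape + (T,), dtype=np.uint8)
    for t in range(T):
        accumulator += rng.poisson(intensity)   # Poisson photon arrivals
        fired = accumulator >= theta            # comparator checks the threshold
        spikes[..., t] = fired
        accumulator[fired] = 0.0                # integrator resets after firing
    return spikes
```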
For each pixel of the camera, the spike can be fired at an arbitrary time, and the firing is asynchronous. The reading out of the spikes for all the pixels is synchronous, and it's at a pretty high rate.\n\n\n\n> 3. How much data is actually produces by this sensor per second? The potential issue I see here is that neuromorphic chips that process event data asynchronously typically have I/O limitations. It is not clear how suitable this algorithm would be for such hardware. Furthermore the use of floating point operations (see softmax in eq 4 for example) may limit the applicability of this algorithm in power efficient hardware. A comment on this would be good.\n\nThank you for your helpful question. Here are my comments.\n\nThe spatial resolution of the current implementation of spiking camera is $250 \\times 400$, and it outputs $40{\\rm k} = 4 \\times 10^4$ binary frames per second. The bandwidth of these data is $250 \\times 400 \\times 4 \\times 10^4 \\times 1 \\ {\\rm bit/s} = 4 \\times 10^9 \\ {\\rm bits/s}$. Thus, the bandwidth of the data output from the camera is $\\frac{1}{8} \\times 4 \\times 10^9 \\ {\\rm Bytes/s} = 5 \\times 10^8 \\ {\\rm Bytes/s} = 476.83 \\ {\\rm MB/s}$\n\nTransmitting the data is realizable for the spiking camera with PCIe Interface. So does the spiking camera in the next generation with $1000 \\times 1000$ spatial resolution, whose bandwidth is around $4.66 \\ {\\rm GB/s}$.\n\nApplying the methods to neuromorphic chips is a popular topic in the community of neuromorphic cameras. Currently, we mainly focus on methods based on traditional artificial neural networks in the float domain to handle the challenges in optical flow estimation for spiking camera. In future research, we will consider studying energy-efficient methods such as methods based on the binary spiking neural networks to apply optical flow for spiking camera in power-efficient hardware.\n\n\n\n> 4. line 209: typo: \"sparsely sampling\"\n\nThank you for your careful observation. We have revised the \"sampling\" to \"sample\" in the paper.", " Thank you for your helpful comments, summary of our paper and affirmation of the performance. The questions and my answers are as follows.\n\n**Answers to the weaknesses**\n\n> 1. The presentation is quite unclear. Esp., as spiking cameras are an emerging device and rarely seen, most people are not familiar with its properties, advantages and specific challenges. The authors did not do well when introducing the background of this task.\n\nThank you for your kind reminder. The properties of spiking camera are important for understanding the mechanism of the camera. However, the space is limited in the paper.\n\n- Properties of spiking camera \n Spiking camera is an emerging kind of sensor that accumulates photons continuously. A spike is fired when the accumulation reaches the threshold at a pixel. The accumulation of photons and the firing of spikes are high-speed and asynchronous at each pixel. As introduced in Section 3.1 of our paper, each pixel of the spiking camera is composed of 3 key components: (a) photon-receptor, (b) integrator, and (c) comparator. The photon-receptor receives photons and converts them to the voltage in the integrator. A spike is fired when the comparator detects that the accumulation in the integrator reaches the firing threshold. At the same time, the accumulation value of the integrator is reset to 0. The details of the 3 components can be found in Fig. 
2 of [a].\n\n Although each pixel of the spiking camera is asynchronous, the reading of spikes is synchronous and controlled by the clock circuits. Thus, the output of the spiking camera is a binary sequence whose shape is $H \\times W \\times T$, where $H$ and $W$ are the spatial resolution of the camera and $T$ is the number of reading times. The \"1\" in the output means the accumulator has reached the firing threshold in the reading moment, and vice versa for \"0\". Currently, the reading rate is up to 40 kHz, which is pretty high. The spatial resolution of spiking camera is 250 x 400. There will be spiking camera with high spatial resolution up to 1000 x 1000 soon.\n\n- Advantages of spiking camera\n\n As described in the above paragraph, the spiking camera has a very high temporal resolution, which enables the camera to record high-speed scenes clearly. Besides, the spiking camera can record the continuous changing procedure of the scenes. These two advantages make the camera appropriate for optical flow estimation.\n\n- Specific challenges of spiking camera\n\n (a) Spikes from the spiking camera are in a new kind of data modality. Extracting features from spikes and using the spikes to estimate optical flow are challenges. \n (b) Compared with traditional image sequences, the spike streams are continuous. Using the information in continuous stream data is also a challenge. \n The (a) and (b) above are associated with contribution (1) (line 48-51) and contribution (2) (line 52-53) in our paper respectively. \n\nMore details of the properties of spiking camera can be found in Section 2 of [a] and [b].\n\n[a] J. Zhao, et al. Reconstructing Clear Image for High-Speed Motion Scene With a Retina-Inspired Spike Camera. TCI 2022. \n[b] T. Huang, et al. 1000x Faster Camera and Machine Vision with Ordinary Devices. Engineering 2022.", " **Answers to the weaknesses**\n\n\n> 2. The architecture presented in Fig. 1 seems very similar to RAFT. The Spatial Information Aggregation in Fig.2 seems very similar to CRAFT [a]. It gives me a feeling that the proposed method has rather limited novelty.\n>\n> [a] CRAFT: cross-attentional flow transformers for robust optical flow. CVPR 2022.\n\nThank you for your comments. Our method has some similarities with RAFT since our method is designed based on RAFT architecture. However, the main contributions of our work is about the processing of spike streams. We concentrate on the two key points below to design our network: \n**(A)** How to extract information from binary spike streams \n**(B)** How to use the continuousness of the spike streams \n\nPoint (A) is associated with Section 3.4 of our paper. Firstly, light intensity is significant when estimating optical flow. However, the light intensity of pixels cannot be directly mapped from the \"0\" and \"1\" in the binary spike streams. Different \"1\" correspond to various light intensities since the \"1\" in the spike streams represents the number of accumulated photons rather than the arrival rate of the photons. Thus, It's not appropriate to construct correlation volumes using features extracted directly from binary spikes for matching. We propose DSFT to transform the spikes into the interval domain. The DSFT of spikes can better represent the arrival rate of the photons at each pixel, i.e., the light intensity at each pixel. Secondly, the arrival rate of photons reflected by spikes is still fluctuating after DSFT transformation since the arrival of the photons follows the Poisson process. 
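As an illustrative aside, a minimal sketch of the interval-domain idea behind DSFT (a simplified version for clarity, not the paper's exact definition or implementation; all names are placeholders):

```python
# Map each pixel's binary spike train to the time between surrounding spikes,
# which is inversely related to the photon arrival rate (i.e., the light intensity).
import numpy as np

def spike_intervals(spikes):
    """spikes: (H, W, T) binary array -> (H, W, T) inter-spike-interval map."""
    H, W, T = spikes.shape
    intervals = np.full((H, W, T), float(T), dtype=np.float32)  # default: no estimate
    for h in range(H):
        for w in range(W):
            times = np.flatnonzero(spikes[h, w])
            if times.size < 2:
                continue
            diffs = np.diff(times)                              # firing-time differentials
            idx = np.searchsorted(times, np.arange(T), side="right") - 1
            idx = np.clip(idx, 0, diffs.size - 1)
            intervals[h, w] = diffs[idx]                        # interval of the enclosing spike pair
    return intervals
```

The fluctuation of such interval maps under Poisson photon arrivals is what motivates the spatial aggregation described next.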
We design the SIA module to use the similarities in the features to reduce the influence caused by the randomness of the data.\n\nPoint (B) is associated with Section 3.5 of our paper. We simultaneously construct correlation volumes for a series of continuous motions. The local correlations looked up according to current flows are encoded into a single hidden state for jointly decoding the corresponding flow fields (as shown in Fig. 12 of our pdf file in supplementary material). Jointly estimating the continuous motion can constrain each flow with other flow fields and reduce the randomness of the spikes since the movement is continuous.\n\nCRAFT [a] is a newly proposed method in CVPR 2022, and we didn't notice this paper before submitting the paper. As shown in Fig. 2 and Section 3.1 of [a], the Semantic Smoothing Transformer in CRAFT aims at only transforming the second frame in optical flow estimation to have more context information to construct a better correlation volume with Cross-Frame Attention.\n\nWe didn't refer to CRAFT when designing Spike2Flow, and there are differences between CRAFT and our method. We design the SIA module aiming at constructing a general feature extraction scheme for spikes, which is different from CRAFT. The spikes and their interval are non-uniform since the arrival of photons follows the Poisson process. We use the self-attention mechanism to alleviate the non-uniformness of spikes using the non-local similarities. The SIA is applied in the extraction of all the features for correlation and context feature. Self-attention [b] used in SIA is a common module in deep learning architecture. CRAFT only applies SST on the feature of the second frame, and it has a relative positional encoding that we don't have.\n\n[a] X. Sui, et al. CRAFT: cross-attentional flow transformers for robust optical flow. CVPR 2022. \n[b] A. Vaswani, et al. Attention is all you need. NeurIPS 2017.", " **Answers to the Questions**\n\n> 1. Please give illustrations of the output (consecutive frames) of spiking cameras, so that readers will have an intuitive understanding what the videos look like, and what challenges come with this new modality.\n\nThank you for your insightful question. We had included the video illustration of the output of the spiking cameras **in the video of the supplementary material** when first submitting the paper. The video has several examples of comparison of color-coded optical flow. The top-left corner of each example is the dynamic visualization of the binary output of the spiking camera. The 0:10 - 0:26 of the video are about real data captured by spiking cameras. Note that due to downsampling and compression, the visual effect of the binary spikes in the video may seem a little different compared with binary spike frames in original resolution. \n\nWe don't include the binary spikes in the body of the paper since they are more appropriate to be seen dynamically. We hope the supplementary material can help the readers better understand the camera. As mentioned in the first question and shown in the top-left corner of the video, the binary spike is non-uniform. Thus, extracting stable features from the binary spike is an important topic.\n\n\n\n> 2. Why spiking cameras are called with different names in different papers? The two cited references are [11; 19], but in [11] it's called a \"spike camera\" (instead of spiking), and in [19] it's called \"vidar\".\n\nThank you for your careful observation and question. 
The \"spike camera\", \"spiking camera\" and \"vidar\" refer the same camera model that is studied in this paper. The \"spike\" in \"spike camera\" is a noun, and it means the data modality of the camera is \"spike\". The \"spiking\" in \"spiking camera\" is a gerund of the verb \"spike\", and it means the camera represents data by \"spiking\". Thus, the meaning of these two words are similar.\n\nThere are similar cases that the same concept has non-uniform names in emerging areas. For example, \"spiking neural networks\" [a] is called \"spike neural networks\" in [b].\n\n[a] W. Fang, et al. Deep Residual Learning in Spiking Neural Networks. NeurIPS 2021. \n[b] L. Zhang, et al. TDSNN: From Deep Neural Networks to Deep Spike Neural Networks with Temporal-Coding. AAAI 2019.", " The authors have proposed a tailored network that extracts information from binary spikes with ST representation based on the differential of spike firing time and spatial information aggregation. In addition, their proposed network utilizes continuous motion clues through joint correlation decoding. All in all, this is a very well-structured paper. An extensive literature review has been performed to support the research objectives of the authors. I especially like the detailed outlining of the architecture of the network. \nThe comparative results on real scenes with spike and flow as well as photo-realistic high-speed motion are adequately represented, and this demonstrates one of the strengths of this work.\n From a technical point of view, there isn't any serious concern regarding the content of this work. I would suggest the authors expand the joint decoding of the correlation section a bit more since it is an important part, but the current status of that section is not very readable to the reviewer. For example, I couldn't get the reasoning behind assigning 0.8 to the decay factor here. Compared to previous works, the limitations and impact of the work are adequately outlined and addressed.", " The authors propose Spike2Flow, a deep neural network that maps spikes from a spiking camera to optic flow estimates. The method encodes spikes with a differential of spike firing time (DSFT) transform. It achieves results competitive with (better than) the state of the art on a new dataset, real scenes with spikes and flow (RSSF). Strengths:\n* The network focuses on a new type of spiking camera, potentially allowing for very high-speed optic flow estimation.\n* The encoding scheme may be useful for other work using spiking cameras.\n* The performance of the algorithm seems competitive.\n\nWeaknesses:\n* The encoding scheme seems quite straightforward. A spiking camera accumulates illuminance, which means that a bright pixel will have a higher firing rate. The proposed DSFT basically makes a local estimate of this rate, encoding the time between two subsequent spikes. This seems quite a naive method that may be sensitive to noise. Nothing is mentioned about this. \n* As far as I understand, the network is not applied to actual spiking camera inputs, but to a simulation of such a camera based on a normal (or high-speed) camera stream. The paper would be much strengthened by using actual spiking camera inputs. \n* RAFT and other sota methods are only applied on the modified image inputs (AvgImg, Spike). The performance on normal images from the slowflow data set are not mentioned. 1. Are you the first to apply the DSFT / what other encoding methods have been proposed? Can you make the relation with firing rate clearer? 
Can you show how sensitive this encoding is to noise?\n2. Can you apply your trained network to spiking camera inputs? Does it generalize well to non-RSSF, non-\"simulated\" data?\n3. The performance of Spike2Flow is always best. Can the performance of RAFT etc. be mentioned on normal RGB images for the dataset? This puts the performances more in context. For example, does RAFT lose a lot of performance when applying it to AvgImg? \n4. Is j-V in equation 7 correct?\n5. The dataset RSSF is mentioned as a contribution, but if I understand correctly, it is an automatic adaptation of SlowFlow, which was made by others. Is this correct? Can it be clarified earlier in the text? N/A", " The authors present an algorithm for optical flow from spiking camera data. The main contributions are:\n\n1) the introduction of a spike based optical flow network, Spike2Flow which extracts features from binary spikes and estimates flow fields\n\n2) a dataset for spike-based optical flow\n\n3) Experimental results demonstrating good comparative results on various datasets and compared to other algorithms Strengths:\n\nS1: the paper is well written overall\n\nS2: the paper deals with an interesting problem, namely using spike cameras for optical flow. Given that spike cameras have high temporal resolution, optical flow is a good application for these cameras\n\nS3: the authors demonstrate good results compared to other similar algorithms\n\nS4: the authors indicate that source code will be released upon publication\n\nWeaknesses:\n\nW1: the algorithm does not really seam terribly innovative. This made me a bit ambivalent in deciding the paper's final ranking, but given the good results, I think it should be of interest to the community that deals with spike and event cameras\n\nW2: Eq 1: clarify it there is at most 1 spike per time intervale T=[T_pre, t]. Do all the spikes emerge from all pixels at the same time of is this asynchronous\n\nW3: How much data is actually produces by this sensor per second? The potential issue I see here is that neuromorphic chips that process event data asynchronously typically have I/O limitations. It is not clear how suitable this algorithm would be for such hardware. Furthermore the use of floating point operations (see softmax in eq 4 for example) may limit the applicability of this algorithm in power efficient hardware. A comment on this would be good.\n\nW4: line 209: typo: \"sparsely sampling\" See my comments above N/A: not applicable", " This paper proposes a method, Spike2Flow, to estimate optical flow from videos recorded by spiking cameras. The input modality is rarely used and not properly introduced, so it hard for me to evaluate the merits of this work. Moreover, the technical contribution seems marginal. Strengths:\n1. The empirical performance seems good.\n\nWeaknesses:\n1. The presentation is quite unclear. Esp., as spiking cameras are an emerging device and rarely seen, most people are not familiar with its properties, advantages and specific challenges. The authors did not do well when introducing the background of this task.\n2. The architecture presented in Fig. 1 seems very similar to RAFT. The Spatial Information Aggregation in Fig.2 seems very similar to CRAFT [a]. It gives me a feeling that the proposed method has rather limited novelty.\n\n[a] CRAFT: cross-attentional flow transformers for robust optical flow. CVPR 2022.\n 1. 
Please give illustrations of the output (consecutive frames) of spiking cameras, so that readers will have an intuitive understanding what the videos look like, and what challenges come with this new modality.\n2. Why spiking cameras are called with different names in different papers? The two cited references are [11; 19], but in [11] it's called a \"spike camera\" (instead of spiking), and in [19] it's called \"vidar\".\n N/A" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "lphZOpi2_Y", "MDafw_TfeLg", "vUvsI0ugtPV", "_C7XeuntLTp", "lphZOpi2_Y", "lphZOpi2_Y", "flzzpgQU92Z", "GoxQM9UD0Tz", "vUvsI0ugtPV", "2WzeIT3cQxg", "_C7XeuntLTp", "DE16BQcVrQo", "DE16BQcVrQo", "DE16BQcVrQo", "GoxQM9UD0Tz", "HDlrCXrYgb6", "HDlrCXrYgb6", "HDlrCXrYgb6", "nips_2022_3vYkhJIty7E", "nips_2022_3vYkhJIty7E", "nips_2022_3vYkhJIty7E", "nips_2022_3vYkhJIty7E" ]
nips_2022_7KBzV5IL7W
INRAS: Implicit Neural Representation for Audio Scenes
The spatial acoustic information of a scene, i.e., how sounds emitted from a particular location in the scene are perceived at another location, is key for immersive scene modeling. A robust representation of a scene's acoustics can be formulated as a continuous field of impulse responses that vary with emitter-listener locations. The impulse responses are then used to render sounds perceived by the listener. While such a representation is advantageous, parameterizing impulse responses for generic scenes remains a challenge. Indeed, traditional pre-computation methods have only implemented parameterization at discrete probe points and require large storage, while other existing methods such as geometry-based sound simulations still suffer from an inability to simulate all wave-based sound effects. In this work, we introduce a novel neural network for light-weight Implicit Neural Representation for Audio Scenes (INRAS), which can render high-fidelity time-domain impulse responses at arbitrary emitter-listener positions by learning a continuous implicit function. INRAS disentangles the scene's geometry features with three modules that generate independent features for the emitter, the geometry of the scene, and the listener, respectively. These lead to an efficient reuse of scene-dependent features and support effective multi-condition training for multiple scenes. Our experimental results show that INRAS outperforms existing approaches for the representation and rendering of sounds at varying emitter-listener locations in all aspects, including impulse response quality, inference speed, and storage requirements.
Accept
This is a technically good paper, with some flaws. Parts of the paper are hard to read. Several questions remain, e.g. how to determine the optimal number and location of bounce points, and how they depend on room layout and content. The motivation behind some of the comparisons, e.g. to AAC, is unclear. Regardless, the overall paper presents a well-defined novel idea, and represents a significant contribution. The authors have also addressed most of the reviewers' comments satisfactorily. I am recommending that the paper be accepted.
val
[ "-42w23NjxwK", "4Q4NexdYhST", "VioZK76GhfQ", "FJnxoLuE70v", "VOTP1wSYq7-", "pgWyXofd5jL", "jmmrNTnCI6-", "i2A5mDe4KCl" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for the response, which addresses many of my concerns. The new comparions are very helpful. It makes sense to use a compact neural representation to encode the acoustic fields, which can be computationally expensive to simulate. However, I am still not very convinced that the dataset used in the paper is the best testbed here, because the main argument is that the neural network can render very *realistic and accurate* sound in real-time. And just looking at the qualitative results shown on the loudness map is not very convincing to show that the model works well on the points where there are no ground-truth. Having said that, the paper proposes some interesting ideas, and the proposed modules make sense and have inductive biases built into the network. I am raising my score and would be fine to see this paper accepted.", " We thank the reviewer for a thoughtful review and valuable feedback. Below we answer questions, and as a result the revisions that we will include in the camera ready version of the manuscript which address the reviewer's concerns.\n### Strengths And Weaknesses\n- W1. We summarize the benefits of using an implicit neural representation: \n - In comparison to previous pre-computation approaches, an implicit neural representation can learn a continuous compact representation for pre-computed impulse responses in a scene, which significantly decreases the storage requirement and provides fast inference in run-time.\n - Indeed some state-of-the-art geometry-based sound simulations can be made real-time. However, these approaches still have the common drawback of geometric-based methods, such as inaccuracy at low frequency and inability to simulate complete wave-based sound effects. In comparison, an implicit neural representation is not limited to these drawbacks at run-time because conceptually it can learn to map input coordinates to infinite-high fidelity impulse responses which can be pre-computed ahead of time. In other words, the complexity of the sound simulation will not be a bottleneck for the quality of the real-time rendering. Therefore, we believe that using an implicit neural representation is a promising direction for future sound rendering techniques.\n - Implicit neural representations can fit real world data, on which it is challenging for an interactive geometric-based simulation to reach the same quality. To further investigate this aspect, we train INRAS on S1-M3969 subset of meshRIR dataset [1] which includes dense real impulse responses recordings in a room with approximate dimensions of 7m x 6.4m. We sample the bounce point every 0.5m. Since the recording is a single channel, we remove the head orientation conditions at the listener module and output a single channel. We use 90% data for training and 10% data for testing. \\\nTo perform the comparison, we created a room with the same dimension using available open source python package (pyroomacoustics) [2] to simulate impulse responses using Image Source + Ray tracing method. For a fair comparison, we faithfully tuned material parameters of the room model to best match the ground truth recordings. 
Our results show that INRAS outperforms geometry-based simulation in all metrics and obtains large gains on them as shown in the following table:\n| **Real World Scene** | C50 error | T60 error | EDT error |\n|---------------------------|----------------|----------------|-----------------|\n| Geometry Based Simulation | 1.45 | 3.67 | 0.042 |\n| INRAS (ours) | 0.35 (-75.86%) | 1.61 (-56.13%) | 0.011 (-73.81%) |\n\nWe intend to include this discussion and the experiment in the revised version of the paper.\n### Questions\n- Q1. For points that do not have ground truth, we refer to the qualitative results shown on the loudness map and impulse response waveforms in Fig 4, 5 and supplementary material, and demo videos. The loudness map shows that INRAS can render a continuous acoustic field at any location and resemble the ground truth (smooth version). The impulse response waveform also shows a high similarity to the ground truth.\n- Q2. See the response to W1.\n- Q3. The number of bounce points is dependent on the room size and the sampling rate. We performed experiments with various sampling rates to evaluate which selection to use. For example, ‘Room_2’ has a perimeter of about 24 meters and the total number points of the mesh boundary is ~ 2400. Our experiments indicated that uniform sampling of the bounce points every 0.5m (sample rate:1/50, 48 bounce points) is optimal. Notably, the impulse response data has the same spatial resolution. \nUsing a higher sampling rate doesn’t achieve better performance because noise and redundant features are induced as the sampling resolution is higher than the spatial resolution. We compare different sampling rates for bounce points for the mesh of ‘Room_2’ in the following table, which will be added to the supplementary information along with the above discussion.\n| Sampling Rate | T60 error | C50 error | EDT error |\n|---------------|-----------|-----------|-----------|\n| 1/100 | 1.47 | 0.53 | 0.019 |\n| 1/75 | 1.42 | 0.53 | 0.018 |\n| **1/50** | **1.41** | **0.51** | **0.016** |\n| 1/25 | 1.41 | 0.59 | 0.019 |\n| 1/10 | 1.70 | 0.71 | 0.025 |\n\n[1] S. Koyama, et al, \"MESHRIR: A Dataset of Room Impulse Responses on Meshed Grid Points for Evaluating Sound Field Analysis and Synthesis Methods,\" 2021 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2021.\n\n[2] R. Scheibler, et al, Pyroomacoustics: A Python package for audio room simulations and array processing algorithms, Proc. IEEE ICASSP, Calgary, CA, 2018.", " We thank the reviewer for a thoughtful review and valuable feedback. We address the questions and specify the intended revisions related to the presentation of our work below.\n### Strengths And Weaknesses\n- W1. We will follow the reviewer’s recommendations to clarify the presentation of the methodology in the Methods section. We will also extensively proofread the manuscript for overall grammatical and typographical amendments.\n- W2. Indeed, the main goal of our work is to introduce a light-weight neural network to learn a continuous implicit function to represent the acoustic fields of scenes. For that purpose, we propose a novel approach to compress acoustic fields in terms of required memory and rapid rendering of impulse responses for interactive applications.\n- W3. We compare INRAS to audio coding standards since INRAS is effectively a novel encoding method to compress the acoustic field of a scene. We thereby estimate the effectiveness of INRAS versus common audio coding standards. 
The comparison shows that the storage requirement of Audio coding standards is much larger than of INRAS (180-350MB vs. 3MB). Moreover, an interpolation approach is necessary for audio coding standards to generate impulse responses at unseen locations. We examined how interpolation of the training set encoded by Audio coding standards (AAC, Opus) impacts the quality of the generated impulse responses for the test set and we find that the quality drops significantly. These experiments demonstrate that INRAS should be more beneficial as an encoding method for acoustic fields of scenes than Audio coding standards.\n- W4. We will revise the abstract to more effectively emphasize the following key points and contributions of our work:\n - We introduce a light-weight neural network to learn a continuous implicit function to represent the acoustic fields of scenes.\n - INRAS disentangles scene’s geometry features.\n - INRAS includes three modules to generate independent features for the emitter, the geometry of the scene, and the listener. This leads to efficient reuse of scene-dependent features for arbitrary emitter-listener positions.\n - INRAS supports effective multi-condition training for multiple scenes by adding only a few trainable parameters. \n - INRAS outperforms existing approaches on all metrics of audio rendering, including the impulse response quality, inference speed, and storage requirements.\n- W5. We thank the reviewer for pointing out that the term ‘generalization’ used to describe the option of using a single unified network to represent multiple scenes is too broad. We agree that a more suitable term could be used to describe this option. When referring to this option in the camera-ready revision of the manuscript we will use the term ‘multi-condition training’ and we will also clarify the exact meaning of this term. In addition to multi-condition training, we also consider additional aspects of INRAS extensions for unseen scenarios, e.g. unseen reflection coefficients. We describe these in experiments below in response to Q1. We will include these in the revised version of supplementary information.\n", " ### Questions\n- Q1. Fundamentally, INRAS implicit neural representation is the parameterization of a continuous acoustic field using neural network parameters. The acoustic field is encoded within the neural network, providing a more compact representation. Since the Soundspace dataset fixed the reflection coefficient before simulating sounds, we didn’t consider such variations. \\\nTo answer this question, we tested whether INRAS can be conditioned on various reflection coefficients by creating a 3m x 5m room using the available open source python package (pyroomacoustics) [1] to simulate impulse responses using Image Source + Ray tracing method. We use spatial resolution of 0.5m to sample both impulse responses and bounce points. For the same room, we created 225 sets of impulse responses with various absorption and scattering coefficients on the walls. We use 90% sets of coefficients to train and 10% to test. For the coefficients seen in the training, we also hold 10% spatial points for testing. To incorporate the coefficient condition in our model, we adapt our model by adding a coefficient embedding using sinusoidal embedding similar to coordinate inputs. An addition operation is performed to add the coefficient embedding to the decomposed features. 
In this experiment, as the following table, INRAS model achieved testing t60 error of 3.83%, C50 error of 1.13dB and EDT error of 0.03 sec, which is quite close to the performance on the training set. These results show that INRAS can perform well even when absorption and scattering coefficients are unseen and are varying.\n| Settings | T60 error | C50 error | EDT error |\n|---------------------------------------------------|-----------|-----------|-----------|\n| Seen Varying Reflection Coefficients (Training) | 3.56 | 1.06 | 0.023 |\n| Unseen Varying Reflection Coefficients (Testing) | 3.83 | 1.13 | 0.030 |\n- Q2. A complete generalization to an unseen environment without any condition is challenging for any type of implicit neural representation including INRAS. Here, we focused on modeling a compact representation for known scenes. However, as discussed above, we do assess several aspects of generalization of the pre-trained INRAS model such as variation of physical properties of the scene, slight geometrical variations, etc. \\\nIf the scene is completely unseen and significantly different than those that INRAS was pre-trained on, the same quality of the generated impulse responses as for the seen scenes cannot be guaranteed. \\\nIndeed, when we tested such an unseen scenario, the evaluation showed a large T60 error between the generated IR and the ground truth. We will add this discussion to the limitation section.\n- Q3. See response to W3.\n- Q4. The number of bounce points is dependent on the room size and the sampling rate. We performed experiments with various sampling rates to evaluate which selection to use. For example, ‘Room_2’ has a perimeter of about 24 meters and the total number points of the mesh boundary is ~ 2400. Our experiments indicated that uniform sampling of the bounce points every 0.5m (sample rate:1/50, 48 bounce points) is optimal. Notably, the impulse response data has the same spatial resolution. \nUsing a higher sampling rate doesn’t achieve better performance because noise and redundant features are induced as the sampling resolution is higher than the spatial resolution. We compare different sampling rates for bounce points for the mesh of ‘Room_2’ in the following table, which will be added to the supplementary information along with the above discussion.\n| Sampling Rate | T60 error | C50 error | EDT error |\n|---------------|-----------|-----------|-----------|\n| 1/100 | 1.47 | 0.53 | 0.019 |\n| 1/75 | 1.42 | 0.53 | 0.018 |\n| **1/50** | **1.41** | **0.51** | **0.016** |\n| 1/25 | 1.41 | 0.59 | 0.019 |\n| 1/10 | 1.70 | 0.71 | 0.025 |\n\n- Q5. In line 226, the alpha_k should be alpha_{i,k}.\n- Q6. In Fig 3, d_{b_{i}}^s is the difference between position vectors.\n\n[1] R. Scheibler, E. Bezzam, I. Dokmanić, Pyroomacoustics: A Python package for audio room simulations and array processing algorithms, Proc. IEEE ICASSP, Calgary, CA, 2018.", " We thank the reviewer for a thoughtful review, valuable feedback and acknowledgement of our work. We provide point by point clarifications and answer questions below.\n### Strengths and Weaknesses:\n- W1. Our approach is indeed dependent on the availability of training data. This is not an issue for virtual scenes since sound simulation can be performed to generate data prior to training. For real world scenes, there typically would be less data available since it cannot be easily generated. 
Plausible solutions that are currently being used are physical scanning of real world scenes and obtaining 3D meshes and then performing sound simulations to generate training data. The scenes in the Soundspace dataset are such real world scenes.\n### Questions:\n- Q1. Bounce points: The number of bounce points is dependent on the room size and the sampling rate. We performed experiments with various sampling rates to evaluate which selection to use. For example, ‘Room_2’ has a perimeter of about 24 meters and the total number points of the mesh boundary is ~ 2400. Our experiments indicated that uniform sampling of the bounce points every 0.5m (sample rate:1/50, 48 bounce points) is optimal. Notably, the impulse response data has the same spatial resolution. \nUsing a higher sampling rate doesn’t achieve better performance because noise and redundant features are induced as the sampling resolution is higher than the spatial resolution. We compare different sampling rates for bounce points for the mesh of ‘Room_2’ in the following table, which will be added to the supplementary information along with the above discussion.\n| Sampling Rate | T60 error | C50 error | EDT error |\n|---------------|-----------|-----------|-----------|\n| 1/100 | 1.47 | 0.53 | 0.019 |\n| 1/75 | 1.42 | 0.53 | 0.018 |\n| **1/50** | **1.41** | **0.51** | **0.016** |\n| 1/25 | 1.41 | 0.59 | 0.019 |\n| 1/10 | 1.70 | 0.71 | 0.025 |\n- Q2. Modeling rooms of different sizes beyond the layout covered in the *Soundspace* dataset is straightforward. The only setting that is required is the distribution of the bounce points depending on the room geometry and size and uniform sampling can be used as we used for *Soundspace* rooms.\n- Q3. We uniformly sample bounce points on the boundaries of obstacles appearing in the scene and there could be any number of obstacles. In the *Soundspace* dataset, the only obstacles are walls that block sound propagation but INRAS could also deal with additional sound blocking objects such as furniture. The distribution of bounce points on the obstacles is according to uniform sampling rate as discussed above.\n", " Sound in a 3D scene depends on the emitter position and the listener position. The spatial acoustic information of the scene is important for having an immersive experience during scene modeling wherein how sounds from a particular location in a scene are perceived in another location in the scene. The authors designed INRAS: implicit neural representation for audio scenes as a way to model the spatial information. Given a geometry of the scene, emitter and listener positions, INRAS learns the Binaural impulse response which is combined with the source sound to render the spatial sound. The INRAS system has two main components where the first component learns the audio scenes feature decomposition through the scatter, bounce, and gather module. Here the bounce point is any point in the surface of the geometry environment from which the sound can bounce from the emitter to the listener. The second component is the listener module that combines the decomposed feature representation from scenes by fusing the three components via concatenation of the features associated with each bounce point b to predict the Spatial Binaural impulse responses taking into account the head orientation and the location of the left/right ear for the listener. 
\n\nThe disentangling of the feature representation in the first stage into scatter, bounce, and gather modules allows the model to generalize to multiple scenes with few trainable parameters. The proposed INRAS method outperformed the existing approaches on all the metrics of audio rendering and has greatly improved the inference, speed, and storage requirements since the model require less than 3 Mb of storage capacity with almost a 4-fold improvement in inference speed. Additionally, the authors provide evidence on why the three stages are necessary such as the removal of the bounce module eliminates the static scene feature impairing performance. The paper provides a good description of the related works and clarifies how the INRAS method is a light-weighted and efficient neural network module that can generate high fidelity spatial impulse response at the arbitrary emitter and listener positions. \n\nStrengths:\nThe model is a lightweight and efficient neural network that produces spatial impulse responses at arbitrary emitter-listener positions. INRAS is capable of modeling continuous implicit function which maps the corresponding time-domain directional and binaural impulse responses of the sound field. \n\nThe training step leverages the fact that the geometry of the scene is static and thus it is possible to learn reusable scene-dependent feature representations which can be associated with the emitter and the listener. The design of the two-stage approach to first generate independent scene geometric features and then during the second stage fuse the features to generate a binaural impulse response is a novel and logical step in building such an acoustic representation. \n\nThe approach generalizes to multiple scenes as the emitter and listener is made aware of scene geometry via the computation of the relative distance to bounce points in the scatter and gather modules. Here the bounce module will provide a static scene-dependent feature representation. The co-ordinate space of multiple scenes is normalized and the bounce points are collected from the different scenes to achieve generalization as the emitter and listener can realize the scene they are a part of. \n\nEvaluation metrics cover the three aspects such as impulse response quality, storage requirements, and inference speed and on all three metrics the approach outperforms the NAF method. The model has a significantly lower C50 error and has a third of the parameters with less than 3 Mb required to store the model and a 4 fold improvement in inference speed due to the smaller size. Additionally, the method outperforms all other relevant approaches when it comes to generalizing to multiple scenes leading to significant improvement in terms of SNR and PSNR for INRAS system. \n\nThe paper is well written and organized with no/few grammatical errors and the supplementary section provides necessary information complementing the main paper. \n\nWeakness: \nThe approach is dependent on the data and availability of the training data for any given number of scenes and thus limited in terms of scaling the system to any scenario. \n\nThe choice of the bounce points is important as the authors show that when bounce points are chosen with missing some of the boundaries the performance reduces for the INRAS representation as the T60 error and the C50 error increase. This raises the question on what would happen to model performance in a situation where the complete boundaries of the 3D scene are not defined. 
\n Authors have performed experimentation with INRAS where the optimal number of bounce points mentioned in the paper is 40 to 60 and it is unclear how the authors arrived at this conclusion. Why would the performance of the system drop with the addition of the bounce points? Wouldn’t a system with 100 bounce points extracted from the mesh boundary that covers the geometry of the environment be better? \n\nHow would the modeling be changed/affected when the listener is placed in rooms of different sizes beyond the layouts that are covered in the Soundscapes dataset? How does the current approach take into account the number of obstacles in the room is unclear? The authors provide a limitation for their work where the boundary of the scene should be given as input to the INRAS system which I agree is a primary limitation. The above limitation is not a problem for scenes with 3D models where the boundary can be inferred if not given but the scenes without any geometry and recorded impulse response. The INRAS method would not work in these situations since the bounce points will be impossible to define in situations with unknown geometry. The method is dependent on the availability of datasets such as SoundScapes which imply the need for INRAS to have a sufficient amount of training data to learn a reasonable acoustic representation of a given scene. This type of data collection is a problem in the case of real-world situations where a large number of impulse responses will be required for INRAS to work. \n\nWith regard to the negative societal impact, the authors highlight a concern where rendered spatial audio can be used to manipulate the original nonspatial sound and create a non-authentic impression of the audio requiring the need for an additional authentication step on the output of INRAS preventing the unethical or illegal use. \n\nAnother concern that I think authors need to consider is whether it is possible to deduce the environment in which the listener is seated where the malicious user may only require the need of spatial sound - the output of INRAS. ", " In this paper the authors address the problem of generating high fidelity impulse responses for acoustic scene rendering for AR/VR and teleconferencing type of applications which rely on faithful recreation of acoustic scenes. The basic idea revolves around decomposing RIR generation into two sets of attributes: i) attributes pertaining to source and receiver locations and ii) attributes pertaining to the room geometry. Features are separately extracted for these attributes using a feed-forward net. Extracted features are then concatenated to estimate binaural impulse responses using another feed-forward net. All the FF nets are trained end-to-end using waveform reconstruction loss and a multi-resolution spectro-temporal reconstruction loss which accounts for reconstruction of magnitude and phase spectrograms separately. The proposed method is compared to audio coding standards and relatively new deep learning based approach for RIR generation (Neural Acoustic Fields - NAF). Experimental results show that the proposed decomposition of attributes and network training is effective in generating high quality impulse responses, reduce storage requirements, fast and can be used to generate IRs in multiple rooms of different shapes. Strengths:\n1. The idea of decomposing RIR attributes separately for room and source/receiver locations is very interesting. 
The idea of using bounce points and the three modules to capture the interaction of the emitter with the room (bounce points), propagation from bounce points, and the receiver with bounce points is also very interesting. \n\nWeakness:\n1. The paper is not very easy to read. I believe the presentation can be greatly simplified. \n2. Goal of the paper is not crisply stated and is scattered throughout the paper. \n3. Purpose of comparing with Audio coding standards is completely lost on the reader. \n4. Abstract is not very informative. \n5. Generalization experiments are a misnomer. Those experiments are mostly multi-condition training rather than generalization. 1. At a fundamental level, from the input-output specifications of the network it appears that the function that the network is trying to approximate is a one-to-many function, as for a specified input - of emitter+receiver locations and bounce points - acoustic conditions can be changed by changing the reflection coefficients of the walls. So are these networks trying to memorize a specific enclosure? In this case it is not generalizable to other rooms with the exact same geometry but with different reflection characteristics?\n2. Would it be possible to assess the generalization capabilities of the network in the true sense by generating RIRs of unseen environments?\n3. Was the point of comparing the proposed approach with AAC and Opus to say merely storing discrete IRs and interpolating will not help? \n4. Generically stating that 40-60 bounce points would suffice does not make sense to me. Is this not dependent on room geometry in general and size in particular?\n5. in line 226, shouldn't \\alpha_k be a function of i?\n6. In Fig 3, is d_{b_{i}}^s a vector of distances or a difference in position vectors? Dimensions of these inputs should be clarified. Similarly for the gather modules. A few limitations are raised in the Questions section. Based on authors' response I will edit this section at a later stage. ", " This paper proposes to use implicit neural representation to represent spatial audio in the scene. It renders high fidelity time-domain impulse responses at any arbitrary emitter-listener positions using neural network parameterization. Experimental results demonstrate that INRAS outperforms existing methods across a series of metrics. Strengths:\n\n- The paper is generally well-written and well-motivated. It nicely reviews prior methods of scene acoustic modeling: wave-based vs. geometry-based. The proposed method is motivated by prior methods on interactive acoustic radiance transfer, and proposes a neural network-based approach to mimic their settings.\n\n- Dividing the whole pipeline into three sub-steps: scattering from the emitter, acoustic transfer, and gathering at the listener makes sense. The design of INRAS well captures the physical process of how sound is propagated from the emitter to the listener.\n\n- Compared to the baseline method NAF, the proposed method has large gains.\n\nWeaknesses:\n\n- The proposed method is motivated by prior methods on interactive acoustic radiance transfer, which is nice. And it is cool to represent spatial audio using a neural representation. However, what is the use of it and why a neural implicit representation is better is not discussed. The neural network basically tries to model how sound is propagated from the emitter to the listener. Why not just directly do geometry-based audio simulation, for example bi-directional path tracing, to obtain the room impulse response?
What is the motivation to use an implicit neural representation to replace the spatial audio simulation? Bi-directional path tracing can also be made real-time with some simplifications.\n\n 1. It is claimed \"the proposed method renders high fidelity time-domain impulse responses at any arbitrary emitter-listener positions using neural network parameterization\", but SoundSpaces only provides impulse responses at a discrete set of grid points in the spatial environment. How the evaluation is done at the points where there is no ground truth?\n\n2. See weakness.\n\n3. How sensitive is the performance to the number of bounce points used? Yes, there are discussions of limitations and potential negative societal impact of the work in the supplementary materials." ]
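The author responses above describe building a geometry-based baseline with the pyroomacoustics package (Image Source + Ray Tracing) and tuning its material parameters to match measured recordings. As a rough illustration of that kind of setup, the sketch below builds a shoebox room with roughly the meshRIR footprint (7m x 6.4m) and computes a single impulse response. The room height, source/listener positions, and absorption/scattering values are assumptions for illustration only, not the configuration the authors actually used.

```python
import pyroomacoustics as pra

# Hybrid Image Source + Ray Tracing simulation of one RIR in a shoebox room.
# All numeric values below (height, positions, materials) are illustrative assumptions.
room = pra.ShoeBox(
    [7.0, 6.4, 3.0],                 # footprint roughly matching meshRIR S1; height assumed
    fs=16000,
    materials=pra.Material(energy_absorption=0.3, scattering=0.1),
    max_order=3,                     # low-order image sources; ray tracing handles the late tail
    ray_tracing=True,
    air_absorption=True,
)
room.add_source([2.0, 3.0, 1.5])      # emitter position (assumed)
room.add_microphone([4.5, 2.5, 1.5])  # listener position (assumed)

room.compute_rir()
rir = room.rir[0][0]                  # impulse response: microphone 0, source 0
print(f"RIR length: {len(rir) / room.fs:.2f} s at {room.fs} Hz")
```

From such simulated (or measured) impulse responses one could then compute acoustic-quality metrics of the kind reported in the tables above (C50, T60, EDT errors).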
[ -1, -1, -1, -1, -1, 8, 5, 5 ]
[ -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "4Q4NexdYhST", "i2A5mDe4KCl", "jmmrNTnCI6-", "jmmrNTnCI6-", "pgWyXofd5jL", "nips_2022_7KBzV5IL7W", "nips_2022_7KBzV5IL7W", "nips_2022_7KBzV5IL7W" ]
nips_2022_Zzi8Od19DSU
Posterior and Computational Uncertainty in Gaussian Processes
Gaussian processes scale prohibitively with the size of the dataset. In response, many approximation methods have been developed, which inevitably introduce approximation error. This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior. Therefore in practice, GP models are often as much about the approximation method as they are about the data. Here, we develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended. The most common GP approximations map to an instance in this class, such as methods based on the Cholesky factorization, conjugate gradients, and inducing points. For any method in this class, we prove (i) convergence of its posterior mean in the associated RKHS, (ii) decomposability of its combined posterior covariance into mathematical and computational covariances, and (iii) that the combined variance is a tight worst-case bound for the squared error between the method's posterior mean and the latent function. Finally, we empirically demonstrate the consequences of ignoring computational uncertainty and show how implicitly modeling it improves generalization performance on benchmark datasets.
Accept
Gaussian Processes are a very nice modelling tool in Bayesian nonparametrics, with very nice uncertainty quantification. But they also lead to serious computational issues. So, in practice, it is difficult to know what part of the uncertainty is due to the data, and what is due to approximations in the computations. Here, the authors propose a new (and cheap) iterative approximation, IterGP. They carefully analyse its approximation error in Section 3. Experimental results corroborate this analysis. Overall, IterGP can reach the same accuracy as previous methods with a limited number of steps, and thus a smaller computational burden. Increasing the number of steps will of course make it even more accurate. The four reviewers agreed on the relevance of the algorithm, the quality of its technical analysis and the quality of the experimental results (I agree with them). Reviewer 5Mot praised the high quality of the writing. The other reviewers overall agreed, but provided a list of minor points that could be fixed to improve the paper. I therefore recommend accepting this paper.
train
[ "RGkMla_GnZa", "H_FKlhmIrEH", "TNEfXFm_JCs", "xFnTlU5_vvT", "ds3q3_ikncf", "4KvCzVA5ba_", "sGbrvQ4EJ6w", "GgBki4KHVIr", "PlPom84hajb", "ElqZ8lzBl8W", "5aflQp4nTU", "K3tbPQFcg6", "rXpXrUxHKFm" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed explanations. I will keep my score as it is. ", " Thank you for your response and feedback! We will make sure to include the details on the stopping criteria used and a clarification about the grey lines in Algorithm 1 in the final version.", " Thank you for addressing my comments and questions - things are much clearer now. In particular, thank you for elaborating on the SVGP overconfidence. \n\nI suggest you include the details on your stopping criterion in a revised paper (adding it to the supplementary should be just fine). It's both interesting and relevant to future users of IterGP. \n\nI also think it would improve your paper if you added the short clarification of the grey lines in algorithm 1 to the main paper. It is great that you included the lines and marked them, but it was just a bit confusing to me.\n\nI still think this is excellent work and will keep my score.", " Thank you for your prompt response. We will make sure to include your suggested discussion points in the final version of the paper and will elaborate on their subleties.", " I thank the authors for addressing my concerns. I am happy with the explanation that Theorem 2 is applicable to kernel approximations that satisfy S42. I would like to see the authors update the main text to include such subtleties. I have updated my score.", " \nThank you for your positive feedback! As requested we will expand the background section in the final version. We hope we could answer any questions you raised below. If anything should remain unclear, we would be glad to address it in the discussion period.\n\n## Detailed Response\n\n### Questions\n> 1. I don't fully understand the definition of the residual in line 90. Specifically, why does it contain $\\hat K$ and not just $K$. I would have expected the latter if it's the residual between the observations and the GP posterior mean.\n\nThe term \"residual\" is unfortunately overloaded in this context. The residual $r_i = y - \\hat{K} v_i$ as referred to in the paper is the *residual of the approximate solution $v_i$ to the linear system* $\\hat{K} v_* = y$, which defines the representer weights. This residual goes to zero as IterGP runs, no matter the observational noise $\\sigma^2$. In contrast the *residual of the GP prediction* $y - \\mu_i(X) = y - Kv_i$ does not. \n\n> 2. I found the discussion of the overconfidence in SVGP extremely interesting, but I don't quite understand the point made in lines 142-146. Since the representer weights of the inducing points are typically calculated exactly, where does the error in the uncertainty estimate (i.e., the overconfidence) come from?\n\nIn the paper we view the representer weights as unknown initially and, as we expend more computation, uncertainty about them contracts. The current estimate for the representer weights is given by $v_i = C_i (y-\\mu) = S_i(S_i^\\top \\hat{K} S_i)^{-1}S_i^\\top (y-\\mu)$ (see eqn (4) and line 95). This is a Bayesian update on the representer weights where $S_i^\\top \\hat{K} S_i = S_i^\\top \\hat{K} \\Sigma_0 \\hat{K} S_i$ is the Gram matrix. Its interpretation is how surprising the projections of the representer weights $S_i^\\top (y-\\mu) = S_i^\\top \\hat{K}v_*$ should be to us, given our prior uncertainty $\\Sigma_0$ about the representer weights. Choosing $S_i=k(X, Z)$ gives the form of IterGP-PI's posterior mean in eqn (8). Now, SVGP uses a similar form for the posterior mean $\\mu(\\cdot)$ (see eqn (7)), **but** the Gram matrix is \"smaller\", i.e. 
$q(X, X) \\preceq k(X, X)$ and therefore $K_{ZX}(q(X, X) + \\sigma^2I)K_{XZ} \\preceq K_{ZX}(k(X, X) + \\sigma^2I)K_{XZ} = S_i^\\top \\hat{K} S_i$. Therefore SVGP in its update of the representer weights is *not uncertain enough* as the Bayesian update would require, leading to its overconfidence in its posterior mean estimate.\n\n> 3. I suppose the choice of a stopping criterion will be problem-specific, but do you have some suggestions for this?\n\nIn our implementation we use\n- *the absolute and relative norm of the residual*, i.e. $\\lVert r_i \\rVert_2 < \\max(\\delta_{\\mathrm{reltol}} \\lVert y \\rVert_2, \\delta_{\\mathrm{abstol}})$, which terminates if a sufficient accuracy is reached on the training data, and\n- *a maximum number of iterations*, which defines the computational budget. \n\nAs you point out one could specify others. For example, from a probabilistic numerics viewpoint one may want to stop if the combined marginal uncertainty at the training data is sufficiently small (relative to the observation noise), i.e. $\\operatorname{tr}(K - K C_i K) = \\operatorname{tr}(K) - \\operatorname{tr}(\\tilde{D}_i^\\top K K \\tilde{D}_i) < \\delta_\\mathrm{unctol}$. Note that the vectors in the matrix $K \\tilde{D}_i \\in \\mathbb{R}^{n \\times i}$ are computed anyway as part of Algorithm 1. Storing them to use as a stopping criterion does not affect the asymptotic memory cost (which is $\\mathcal{O}(ni)$) and computing the stopping criterion has time complexity $\\mathcal{O}(ni^2)$.\n\n> 4. Figure S5 (b) in the supplementary hints are instabilities in certain settings. Do you know which parts of the algorithm lead to this, and can you say something more about when this could happen?\n\nWe've empirically only observed this so far for IterGP-PI, i.e. the choice of actions which most closely corresponds to SVGP on the KEGGundir dataset (see Figure S5b). We conjecture this may happen if the inducing points are chosen in such a way such that $S_i=K(X, Z)$ is numerically close to having rank $<i$. Algorithm 1 implicitly orthogonalizes the actions $S_i$ with respect to the $\\hat K$-inner product. We conjecture that our current implementation may become numerically unstable if the assumption of independence of the actions is numerically close to not being satisfied. In this case the error introduced by finite precision potentially affects the result of the reorthogonalization in step 8 of Algorithm 1.\n\n### Other\n> - A couple of lines in algorithm 1 are coloured grey, but there's no discussion of what this means [...].\n\nGreyed out quantities are *not necessary to compute the GP approximation*. We included them since their computation might be of independent interest (the kernel matrix approximation $Q_i$), or because they correspond to quantities we introduce in the derivation of the Algorithm (the belief over the representer weights $p(v_*)$).", " Thank you for your positive review. We are pleased you recognize the strengths of our paper. We attempt to answer any remaining questions below. If anything remains unclear, we would be happy to clarify during the discussion period.\n\n## Detailed Response\n\n### Questions\n\n> 1. Would the proposed algorithm work for distributed GP as well? If yes, what is the action $s_i$?\n\nAssuming distributed compute nodes are available, a direct way to leverage them for IterGP is to perform distributed matrix-vector multiplication. 
It's an interesting question to consider whether one can recover distributed Gaussian processes [Deisenroth2015] in the IterGP framework. In [Deisenroth2015], each compute node has access to part of the dataset, individual GP experts are trained and then combined. Training on only a subset of the dataset in the IterGP framework corresponds to actions $s_i$ which have non-zero entries only for the entries corresponding to those datapoints in the subset. We conjecture that similar to the distributed framework in [Deisenroth2015], the combination of the information propagated up from the individual nodes could prove challenging.\n\n> 2. The theoretical analysis assumes that the actions $s_i$ are linearly independent. Is this always true for the considered approximations?\n\nThis is true by construction for IterGP-Chol, IterGP-PBR and IterGP-CG. For IterGP-PI's actions $s_i = k(X, z_i)$, the vectors may be dependent for adversarially chosen inducing points (i.e. $z_1 = z_2$). \n\n> 3. Have you verified that the \"computational uncertainty\" term will go to zero when the computational resource is infinite?\n\nFor $n$ linearly independent actions the computational uncertainty goes to zero in at most $n$ iterations. To see this recognize that $C_n = \\hat{K}^{-1}$ and therefore the computational uncertainty $\\Sigma_n = k(x_\\star, X) \\Sigma_n k(X, x_\\star) = k(x_\\star, X) (\\hat{K}^{-1} - C_n) k(X, x_\\star) = 0$ in eqn (6).\n\n> 4. Do you have more experimental results obtained using other kernels, such as the spectral mixture kernel and some hybrid kernels?\n\nAny further experiments we performed beyond the main paper are in the appendix in Section S3. To be concise, we focused on the most commonly used kernels: RBF, Matern(1/2) and Matern(3/2).\n\n> 5. In the conclusion, you mentioned that the proposed algorithm is particularly useful for online data processing, which is not clear to me?\n\nRunning IterGP on a large dataset with actions not targeting part of the dataset at all (i.e. having zero entries for the corresponding datapoints) is equivalent to not having observed that data yet. In that sense, Algorithm 1 is inherently online. We make this precise in Theorem S7.\n\n### References\n- [Deisenroth2015] Deisenroth, Marc, and Jun Wei Ng. \"Distributed Gaussian processes.\" International Conference on Machine Learning (ICML), 2015.", " \nThank you for your review and the suggestions for improvement. We will use the extra page available for the final version to expand the explanation of how our method is derived and the connection to other GP approximations. We hope to answer any raised questions below. If anything should remain unclear, we would be happy to follow-up during the discussion period.\n\n## Detailed Response\n\n### Questions\n> - It is not clear that the marginalization of $v_*$ below Eq. (2) gives the GP prior. The mean would be zero mean after the marginalization, using Eq. (2). The resulting variance will have to add the variance of the mean. Can the authors clarify this? It seems Eq. (2) is missing the GP mean.\n\nThank you for pointing out the typo in eqn. (2). 
It should contain the prior mean function $\\mu(x_\\star)$ and therefore read\n\n$$p(f_\\star \\mid v_*) = \\mathcal{N}(\\mu(x_\\star) + k(x_\\star, X)v_*, k_*(x_\\star, x_\\star))$$\n\nNow, if we marginalize out $p(v_*) = \\mathcal{N}(v_*; 0, \\hat{K}^{-1})$, then the resulting marginal $p(f_\\star) = \\int p(f_\\star \\mid v_*)p(v_*)dv_*$ has mean $\\mu(x_\\star) \\eqqcolon \\mu_\\star$ and covariance \n\n$$k_*(x_\\star, x_\\star) + \\underbrace{k(x_\\star, X) \\hat{K}^{-1} k(X, x_\\star)}_{\\text{uncertainty from marginalization}} = k(x_\\star, x_\\star) - k(x_\\star, X) \\hat{K}^{-1} k(X, x_\\star) + k(x_\\star, X) \\hat{K}^{-1} k(X, x_\\star) = k(x_\\star, x_\\star) \\eqqcolon K_\\star$$\n\nThe mean and covariance after marginalization are now precisely the prior $\\mathcal{GP}(\\mu, k)$ evaluated at the new datapoints $x_\\star$. Therefore it should correctly read in l87: \"recovers the GP prior $\\mathcal{N}(\\mu_\\star, K_\\star)$.\"\n\nWe realize that the visual differentiation between $\\*$ and $\\star$ may be difficult. To avoid confusion, we will update our notation to more clearly distinguish between the mathematical posterior (denoted by $*$) and new data points to predict on (denoted by $\\star$). \n\n> - Above Eq. (3) should not $r_i$ be $r_{i-1}$?\n\nYes it should be. Thank you for pointing this out.\n\n### Other\n> The authors have indicated that they have described the limitations of their method. However, they do not indicate where in the manuscript.\n\nWe describe limitations of IterGP as compared to linear time approximations in Section 2.2: \"The Cost of Computational Uncertainty\".", " \nThank you for your review and the discussion points you raise. We want to first point out a key technical oversight in your review that may have lead to a more negative opinion of this work: **Theorem 2 does not apply to other GP approximations, which therefore do not quantify combined and computational uncertainty.** See our detailed responses below. If we answer your questions, we would appreciate if you'd consider updating your score. If anything remains unclear, we are happy to clarify in the discussion period.\n\n## Detailed Response\n\n### Q1: Impact of the Policy\n> The first [missing discussion] is the impact of the policy.\n\nAs is illustrated in Figure S2 the choice of policy determines \n\n- where computation in input space is targeted, and thus \n- where the combined posterior contracts first (see e.g. IterGP-Chol), and\n- whether the error in the posterior mean or (co-)variance are predominantly reduced first (compare IterGP-CG vs IterGP-PBR / IterGP-PI)\n\nThus the policy choice is application-dependent. If I am solely interested in the predictive mean, I may choose IterGP-CG. If my goal is UQ (e.g. for active learning) I may choose IterGP-PI. Such a choice is not unique to IterGP, but necessary whenever we select a GP approximation. What IterGP adds is computation-aware, meaningful uncertainty quantification (in the sense of Theorem 2).\n\n### Q2: Combined Uncertainty vs. Approximate Posterior Variance\n\n> I don't see the benefit of the so-called combined uncertainty, if it's just equal to the approximate posterior variance. \n\n**The combined uncertainty** (with $C_i \\approx \\hat{K}^{-1}$) **as opposed to the approximate posterior variance** (arbitrary $L \\approx \\hat{K}^{-1}$) **guarantees that**\n\n1. the marginal variance of the exact GP is never underestimated (see eqn. 6 and Figure 1 bottom).\n2. 
the combined uncertainty is a worst case bound on the error to the true latent function (see eqn. 13)\n\nIn that sense, the combined uncertainty describes precisely how uncertain we should be, given our approximate posterior mean (see Theorem 2). These properties do not hold for general GP approximations (as illustrated for SVGP in Figure 1 and discussed below).\n\n> The second [missing discussion] is distinguishing the combined uncertainty (in eqn. 6) from the regular approximate posterior variance at test points.\n\n**The combined covariance is a special case of the approximate posterior covariance, _but_ the specific properties of $C_i$ are crucial.**\n\nThe approximate posterior std. deviation generally *cannot* be decomposed into a term bounding the error to the latent function (combined uncertainty) and a term bounding the error to the mathematical posterior mean (computational uncertainty). This is only possible if $L \\approx \\hat{K}^{-1}$ satisfies eqn. (S42) (used in the proof of Theorem 2 in lines 704 and 706), as is the case for IterGP where $L=C_i$.\n\n\n### Q3: Applicability of Theorem 2 to Other GP Approximations\n\n> [...] since other approximate GP inference techniques have their own approximation of $\\hat{K}^{-1}$, their approximate posterior variance can also be used to give worst-case bounds in the sense of Theorem 2.\n\n**Theorem 2 does not apply to other approximate GP techniques.**\n\nMost GP approximations (Nyström / SVGP, RFF, NNGP) do *not* satisfy eqn (S42) and therefore *do not satisfy Theorem 2*. If Theorem 2 were to apply to SVGP, then SVGP would *provably* not underestimate uncertainty. This would directly contradict the literature on SVGP [Bauer2016, Huggins2019] and the illustration in Figure 1. However, there may be a close IterGP analog, which improves the UQ as we show for SVGP -> IterGP-PI.\n\n> [...] Skimming through Theorem 2’s proof, there is no part of the proof that relies on the actual structure of IterGP, in the sense that the statement will hold if we replace $C_i$ with any other approximation of $\\hat{K}^{-1}$. \n\n**This is incorrect. The proof of Theorem 2 relies on the fact that $C_i \\hat{K} C_i = C_i$ (eqn S42).** See lines 704 and 706.\n\nEqn. (S42) is satisfied if $C_i\\hat{K}$ is the $\\hat{K}$-orthogonal projection onto the space spanned by the actions. Algorithm 1 precisely constructs this projection for a given sequence of actions. Since orthogonal projections are unique, *if another GP approximation $L \\approx \\hat{K}^{-1}$ is such a projection, it is an instance of IterGP.*\n\n\n### Other\n> Lines 83-84: Equation 2 is true if the prior mean is zero.\n\nThanks for pointing this out! We've added the missing prior mean in the final version.\n\n> Lines 96:97: the contraction only happens for a well-chosen set of actions $s_i$, not in general\n\nThe posterior contracts for any policy, which generates linearly independent actions $s_i$ (see Proposition S4). \n\n> Lines 119-120: it’s not true that the algorithm is online. Line 9 of Algorithm 1 requires the whole kernel matrix $\\hat{K}$\n\n**Algorithm 1 is online as we prove in Theorem S7**, but can be more concisely written via the whole kernel matrix.", " The authors present a novel method, named IterGP, for doing approximate inference in Gaussian processes. Taking a probabilistic numerics approach, they treat the representer weights, $\\mathbf v_* = \\hat{\\mathbf K}^{-1} (\\mathbf y - \\boldsymbol \\mu)$, as a quantity to iteratively compute. 
By building on recent advances in probabilistic linear solvers, they derive an algorithm for iteratively updating a distribution over the representer weights, which captures the (computational) uncertainty in them. \nBy reparameterising the GP posterior in terms of the representer weights and then marginalising them out, the authors obtain a new GP posterior expression, which implicitly accounts for the uncertainty in approximation made by the iterative representer weights.\n\nWhile explicitly computing the computational uncertainty is expensive, computing the combined (computational and mathematical) uncertainty has a quadratic cost, meaning that IterGP computationally sits in-between exact GP inference (cubic) and linear-time approximation methods, which, however, do not provide calibrated uncertainty estimates. At the same time, the memory cost of IterGP is only linear.\n\nTo make the iterative updates, IterGP requires a policy for selecting and weighing data. The authors discuss different strategies and show how they correspond to standard approximation methods, such as conjugate gradients and inducing point methods, and extend these to properly account for computational uncertainty.\n\nThe paper concludes with a theoretical analysis, proving general convergence of the posterior mean as well as proving that the combined uncertainty is a worst-case bound on the error, and an empirical evaluation of IterGP, showing that it provides better posterior estimates than commonly used methods.\n **Strengths**\n\nThe paper overall is of very high quality. The writing is clear, and the figures are fantastic. I especially like the pedagogical use of colours for the different kinds of uncertainties, which are shared between figures, equations, and the text.\n\nFrom a technical perspective, the paper is very thorough, and the authors carefully explain the intuition behind the method where possible. They also provide a good discussion of the different policies for choosing actions and how these link back to commonly used approximation methods, showing a kind of unification (and extension) of these methods.\n\nThe theoretical analysis seems solid, but I did not check it in detail.\n\nWhile the field of probabilistic numerics is not my area of expertise, the work seems original and highly novel to me. Overconfident posteriors are a well-known issue in the approximate GP world, so the contributions in this paper make it a very exciting piece of work. It is definitely a significant contribution to the approximate GP literature, which will be of major interest to the NeurIPS community.\n\n**Weaknesses**\n\nWhile the writing is overall of very high quality, there are a lot of new concepts to digest for someone not in the field of probabilistic numerics (but who does have experience with GPs). A short background on probabilistic numerics (perhaps probabilistic linear solvers in particular) would have been helpful for me, but I understand that I might not be the target audience.\n\nOne thing that the authors did not discuss is how to choose the stopping criterion in algorithm 1. I suppose there are some trivial choices (e.g., if one has a fixed computational budget), but perhaps one can be smarter here. A brief discussion of this would have been nice.\n\nFor the experiments, it would have been interesting to see SVGP and IterGP compare in terms of wall-clock time for different numbers of inducing points/iterations. 
Since SVGP is faster to compute, I suppose it would achieve better results for very few inducing points. It would be interesting to see when IterGP begins to perform better for a fixed computational budget.\n\nAlso, it would be interesting to see the RMSE and NLL obtained by an exact GP indicated in both figure 4 and 5.\n\n**Various minor things**\n* In equation (2), I think a $+ \\mu$ is missing in the mean.\n* The next-to-last sentence in the caption of figure 2 appears incomplete.\n* A couple of lines in algorithm 1 are coloured grey, but there's no discussion of what this means, as far as I can see.\n NB I have quite limited experience with probabilistic numerics, so the following questions should be read in this context! :)\n\n1. I don't fully understand the definition of the residual in line 90. Specifically, why does it contain $\\hat{\\mathbf K}$ and not just $\\mathbf K$? I would have expected the latter if it's the residual between the observations and the GP posterior mean.\n\n2. I found the discussion of the overconfidence in SVGP extremely interesting, but I don't quite understand the point made in lines 142-146. Since the representer weights of the inducing points are typically calculated exactly, where does the error in the uncertainty estimate (i.e., the overconfidence) come from? (To be clear, this is absolutely not a crucial question; I'm just trying to understand SVGP :) ).\n\n3. I suppose the choice of a stopping criterion will be problem-specific, but do you have some suggestions for this? While the computational uncertainty is too expensive to explicitly compute in general, is it possible to somehow estimate it and stop when we have reached a pre-defined tolerance?\n\n4. Figure S5 (b) in the supplementary hints are instabilities in certain settings. Do you know which parts of the algorithm lead to this, and can you say something more about when this could happen?\n Potential negative societal impacts have not been discussed, but it is not necessary given the paper's theoretical nature.", " Gaussian process regression model is impractical for big data and various approximations have been proposed to alleviate such difficulty. However, people have overlooked the uncertainty due to the numerical approximations for saving computational resources. This paper presented a novel low-complexity algorithm that respects uncertainties arising from the finite number of data observed and the finite amount of computation expended. Besides, theoretical analyses are given to support the effectiveness of the proposed algorithm. Experimental evaluations were also conducted to verify the proposed algorithm with some widely used approximation methods, such as sparse GP. Strengths:\n1. A novel GP approximation that accounts for computational uncertainty.\n2. Well-crafted performance analyses (Th.1 and Th.2)\n3. Experimental evaluations in terms of a few widely used GP approximations. \n\nWeaknesses:\n1. In Eq.(6), I understand that the \"computational uncertainty\" term results in a combined uncertainty that is cheaper to evaluate; however, I doubt this term is solely due to the numerical approximation made to the standard GP. 1. Would the proposed algorithm work for distributed GP as well? If yes, what is the action $s$_i?\n2. The theoretical analysis assumes that the actions $s$_i are linearly independent. Is this always true for the considered approximations?\n3. 
Have you verified that the \"computational uncertainty\" term will go to zero when the computational resource is infinite?\n4. Do you have more experimental results obtained using other kernels, such as the spectral mixture kernel and some hybrid kernels?\n5. In the conclusion, you mentioned that the proposed algorithm is particularly useful for online data processing, which is not clear to me? \n Maybe more experiments with different kernel functions and different data lengths would be helpful. ", " This paper proposed a method to take into account extra uncertainty arising from using approximate GP solutions to scale GPs to large datasets. The proposed method is based on making inference about some parts of the computation of the posterior predictive distribution of a full GP. The authors indicate that the proposed approach is a generalization of many GP approximate methods. Theoretical results associated with the proposed method are derived and some experiments are carried out on synthetic and real-world data.\n Weaknesses:\n\n - The explanation of the proposed method is poor. The authors have to do a better job of explaining their method. Eq. (3) and the update on p(v*) have to be better explained.\n\n - Section 2.1 is poorly explained. The connections with other methods are not clear.\n\n - The experimental section is a bit weak.\n\nStrengths:\n\n - Nice theoretical results are derived for the proposed method.\n\n It is not clear that the marginalization of v* below Eq. (2) gives the GP prior. The mean would be zero mean after the marginalization, using Eq. (2). The resulting variance will have to add the variance of the mean. Can the authors clarify this? It seems Eq. (2) is missing the GP mean.\n\nAbove Eq. (3) should not r_i be r_i-1?\n The authors have indicated that they have described the limitations of their method. However, they do not indicate where in the manuscript.\n", " The paper proposes an approximate inference technique, called IterGP, for Gaussian process (GP) regression. Methodologically, the key algorithm (Algorithm 1) provably constructs increasingly more accurate approximations (of the GP posterior) as the runtime increases. Each approximation also comes with a notion of approximation error (in the sense of Theorem 2). Empirically, IterGP can be computationally cheaper, but just as accurate as existing methods (Figure 4), or more expensive but more accurate compared to existing methods (Figure 5). I think the paper's key ideas are original and strong, but I have some concerns about the quality and clarity of the manuscript.\n\n# Originality\nSection 2 is a novel way of using probabilistic linear solvers (PLS) to perform GP regression. Section 2 treats the so-called representer weights as unknown/random, whereas existing works treat either the kernel matrix or the inverse of the kernel matrix as random. The randomness here is due to limited computation. Since the posterior mean at a test point is a function of the representer weight (Equation 2), the randomness in the representer weight propagates into the posterior mean and is computable as in Equation 6. The update equations 4 and 5 give the general skeleton of how to reduce the randomness in the representer weights. \n\n# Quality \nIn terms of strengths, the visualizations and plots in the paper are very legible and convey the message well. Figure 1 panel b breaks down uncertainty into two constituent parts. Figure 4 demonstrates how IterGP is cheaper than conjugate gradient (CG) GP but just as accurate.
Similarly, Figure 5 is very easy to read, and its takeaway is easy to understand.\n\nIn terms of room for improvement, I think the paper is missing three kinds of discussions. \n- The first is the impact of the policy. Although Table 1 mentions a list of policies, the rest of the paper does not discuss the pros and cons of different policies, or give recommendations on which to use. I do understand that in a sense the actions s_i are hyper-parameters of Algorithm 1, and tuning hyper-parameters is potentially its own problem. \n- The second is distinguishing the combined uncertainty (in Equation 6) from the regular approximate posterior variance at test points. What I mean by approximate posterior variance is the following: the true posterior variance at $x_*$ is \n$$k(x_*, x_*) - k(x_*, X) \\hat{K}^{-1} k(X, x_*),$$\nwhereas when we have an approximation of the inverse of the kernel matrix, say $L \\approx \\hat{K}^{-1}$, we announce the approximate posterior variance to be\n$$k(x_*, x_*) - k(x_*, X) L k(X, x_*).$$\nMy current impression is that the combined uncertainty is exactly the approximate posterior variance for the choice $L = C_i$, where $C_i$ is constructed using the IterGP iterative procedure. But more generally, all approximate GP approaches that have approximations to $\\hat{K}^{-1}$ will also have the combined uncertainty, and it’s equal to the approximate posterior variance that they report. \n- The final discussion is whether Theorem 2, in particular Equation 13, applies to other kinds of approximate GP inference techniques. Skimming through Theorem 2’s proof, there is no part of the proof that relies on the actual structure of IterGP, in the sense that the statement will hold if we replace $C_i$ with any other approximation of $\\hat{K}^{-1}$. In that case, since other approximate GP inference techniques have their own approximation of $\\hat{K}^{-1}$, their approximate posterior variance can also be used to give worst-case bounds in the sense of Theorem 2. \n\n# Clarity\nThe paper is well-written. There are some negligible typos that I will highlight in the “Questions” section.\n\n# Significance\nAdvances in scalable and accurate GP inference are important since GP regression is a staple of modern data analysis.\n I would like the authors to address my questions regarding the “quality” subsection of the “Strengths and Weaknesses” section. My biggest concern is being confused about the relationship between combined uncertainty and the approximate posterior variance. The answer to this question is important, because if the combined uncertainty is the same as the approximate posterior variance, then the claim in the abstract “[…] due to limited computation, is entirely ignored when using the approximate posterior” is not accurate, since approximate GP automatically accounts for both computational and mathematical uncertainties, through the approximate posterior variance / combined uncertainty. More generally, I don't see the benefit of the so-called combined uncertainty if it's just equal to the approximate posterior variance. My confusion also spawned my question about whether Theorem 2 applies to other approximate GP techniques. It also makes me question whether the claim in Figure 1 is valid: if it were true that Theorem 2 also applies to SVGP, then the message that modelling computational uncertainty improves the GP approximation is incorrect. 
Perhaps the cause of IterGP-PI performing better than SVGP is due to the latter using inducing points, which can be inaccurate.\n\nThe list of typos I’ve found is as follows:\n- Lines 83-84: Equation 2 is true if the prior mean is zero.\n\n- Lines 96-97: the contraction only happens for a well-chosen set of policies $s_i$, not in general\n\n- Lines 119-120: it’s not true that the algorithm is online. Line 9 of Algorithm 1 requires the whole kernel matrix $\\hat{K}$.\n I do not see any negative societal implications to this line of work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 3, 4 ]
[ "GgBki4KHVIr", "TNEfXFm_JCs", "4KvCzVA5ba_", "ds3q3_ikncf", "PlPom84hajb", "ElqZ8lzBl8W", "5aflQp4nTU", "K3tbPQFcg6", "rXpXrUxHKFm", "nips_2022_Zzi8Od19DSU", "nips_2022_Zzi8Od19DSU", "nips_2022_Zzi8Od19DSU", "nips_2022_Zzi8Od19DSU" ]
nips_2022_9s3CbJh4vRP
Precise Regret Bounds for Log-loss via a Truncated Bayesian Algorithm
We study sequential general online regression, known also as sequential probability assignments, under logarithmic loss when compared against a broad class of experts. We obtain tight, often matching, lower and upper bounds for sequential minimax regret, which is defined as the excess loss incurred by the predictor over the best expert in the class. After proving a general upper bound we consider some specific classes of experts from Lipschitz class to bounded Hessian class and derive matching lower and upper bounds with provably optimal constants. Our bounds work for a wide range of values of the data dimension and the number of rounds. To derive lower bounds, we use tools from information theory (e.g., Shtarkov sum) and for upper bounds, we resort to new "smooth truncated covering" of the class of experts. This allows us to find constructive proofs by applying a simple and novel truncated Bayesian algorithm. Our proofs are substantially simpler than the existing ones and yet provide tighter (and often optimal) bounds.
Accept
This paper considers the problem of online learning with the logarithmic loss, and provides a new algorithm based on smoothing of the log loss which matches certain rates that were previously only achieved through non-constructive methods. Reviewers agreed that the algorithm and proof technique are novel, and that the resulting regret bounds improve over the state of the art. For the final version of the paper, the authors are encouraged to incorporate the reviewers' comments regarding presentation.
train
[ "LmTWlASJQuG", "6xl1RGCbBz1", "mKHWKuYYYrg", "RP7b2XSi81", "kLdUOMffq00", "5VCm_vJd7m", "2XkS45eWoNn", "bK_t8ccFlOR" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'd like to thank the authors for carefully addressing my concerns. I was the only reviewer that had a hard time with the presentation, so do not feel obligated to drastically change the paper because of it. My guess is that I am more acquainted with the study of (simulatable) experts in the online learning literature, and the language used made it hard for me to find my footing. Yet, since NeurIPS attendees have wide-ranging interests, having a more accessible introduction and presentation is always welcome. The overview you gave is quite helpful and I enjoyed the summary on the techniques used in the lower-bound.\n\nFinally, the other reviews, mainly from 75nX, helped me better understand the context of the contributions. I will increase my score to reflect my confidence that the authors will improve the presentation as much as space allows and to reflect my improved understanding of their contributions.", " Thanks to the authors for their comments. I agree with all of their responses to my review, and have no further questions.", " Thank you very much for the detailed review, helpful comments, and the list of typos.\n\n**Regarding novelty of global sequential covering**: We agree that the same idea was used in [RST10] and also in (Ben-David, Pal, Shalev-Shwartz, 2009). We believe it is worth defining the concept explicitly so that one can study the concept as an independent research object instead of a technical ingredient as implicitly used in the aforementioned papers. We will rephrase our statement of novelty to reflect this.\n\n**Regarding the Bayesian average and smoothing**: Thank you very much for providing the references. We agree that [BFR21] also used the idea of smoothing for controlling the unboundedness of log-loss. We will add the citations and corresponding literature. As noted by the reviewer, these results only consider the case when the covariates are presented i.i.d. More importantly, these results hold only for the average case minimax risk, i.e., the performance of the predictor is compared with the best expert *on average*. In our paper, by contrast, we compete with the best experts for any individual $x^T,y^T$.\n\n**Regarding the optimality of constants**: We agree that the tightness of the constants is not substantial for the case where an estimation of the sequential covering number is required since it is not known how to control the log factor. Our main purpose for the statement of optimality of the constants is to emphasize that one should not hope to improve the constants in the form $2 \\alpha T+ \\log |\\mathcal{G}|$ any further. However, it is quite possible that one can obtain other forms of bounds that improve this bound. Moreover, the optimality of the constants allows us to obtain the tightest bound with optimal leading constant for special cases, which may be of independent interest. Indeed, obtaining optimal constants has been long investigated in the information theory and machine learning communities.", " Thank you very much for the detailed review and helpful comments.\n\n**Comparison to \"An improper estimator with optimal excess risk in misspecified density estimation and logistic regression\"**: The problem considered in that paper is different from the problem we studied in our paper. Specifically, that paper considers the standard supervised regression problem, where the goal is to generate a function $g : X \\rightarrow \\hat{Y}$ by observing i.i.d. generated samples $X^T, Y^T$ so as to minimize the excessive risk. 
In our paper, we consider the prediction problem where the samples are generated sequentially, and more importantly, we have no assumption on the statistical mechanism for generating the samples (unlike the i.i.d. assumption in their paper). It is worth noting that a regret bound in our setting automatically implies an excessive risk bound in their setting by using the standard online-to-batch conversion technique. However, as noted in their paper, this conversion may sometimes provide suboptimal bounds in their setting.\n\n**Minor comments**: The citation [4, Chapter 3.3] refers to the ideas used in Theorem 3.2 and Proposition 3.1. However, these results in [4] only study the case for finite experts. Our Lemma 2 is an analog of these results for infinite classes (the proof uses exactly the same idea).", " Thank you very much for the detailed review. We apologize for the lack of clarity in our presentation. We note that a nine-page limit significantly inhibits our ability to present background and related results in a comprehensive and self-contained manner. We provide a brief overview of the literature below that may be more accessible for readers who are not intimately familiar with related work. We are happy to include it in the final manuscript.\n\n**Overview:** Sequential probability assignment under logarithmic loss is a fundamental problem that has been extensively studied in the information theory community, due to its close connection to universal compression. However, information theory only considers *simulatable* experts, i.e., experts that make predictions based only on previously observed labels (there are no features, $\\textbf{x}^T$). It can be shown that the regrets for simulatable experts are completely characterized by the Shtarkov sum. In (Rakhlin and Sridharan, 2015), the authors made an important extension of the framework to allow experts to make predictions that may depend on some side information (i.e., the features, $\\textbf{x}^T$). Their main techniques are the concept of (local) sequential covering and chaining. However, their approach only works when the losses are Lipschitz and bounded, which does not apply to log-loss. To deal with this issue, they used a hard truncation approach that truncates any function $h$ to $h'$ such that $h'(x) = \\delta*1(h(x) < \\delta) + h(x)*1(\\delta \\le h(x) \\le 1-\\delta) + (1-\\delta)*1(h(x)> 1-\\delta)$. This approach could reduce the gradient of log-loss from unbounded to bounded. However, it still scales as $1/\\delta$, where $\\delta$ is an additional parameter and is the source of suboptimality of their approach. (Bilodeau, Foster, Roy 2020) observed that by exploiting the self-concordancy of log-loss, one can bypass the truncation approach and obtain tighter bounds that are tight for specific classes (i.e., they can not be improved universally). However, their approach is non-constructive. Despite the non-triviality of the proof of (Bilodeau, Foster, Roy 2020), we showed in our paper, perhaps surprisingly, that their bounds can be achieved algorithmically (i.e., with an implementable algorithm) by using a simple smooth truncation approach (cf. Lemma 4), which is different from the truncation of (Rakhlin and Sridharan, 2015). Moreover, our bounds improve the constants of (Bilodeau, Foster, Roy 2020) from (4,4) to (2,1), which are tight for specific classes (i.e., the constants are not universally improvable). 
Our approach also uses the concept of global sequential covering, which was implicitly used in (Rakhlin, Sridharan and Tewari, 2010) and dates back to the ideas in (Ben-David, Pal, Shalev-Shwartz, 2009).\n\n**For the lower bounds based on Shtarkov sum**: The main observation is that when we have features $\\textbf{x}^T$ known in advance, the fixed design regret introduced in our paper is equivalent to the classical *simulatable* case, and more importantly, fixed design regrets are always lower bounds for the sequential regret. One can therefore analyze fixed design regret using the Shtarkov sum by choosing some hard $\\textbf{x}^T$ (the selection of $\\textbf{x}^T$ is the main technical part). See Theorems 5 and 6 for examples of how appropriately selected $\\textbf{x}^T$ leads to tight lower bounds.\n\n**Other comments**:\n* The dependency on the Lipschitz constant $L$ was considered by [11] where they have linear dependency on $L$, though their Lipschitz condition is on $\\log f$ instead of $f$. [3] did not explicitly mention the dependency on $L$; however, their result can also provide a logarithmic dependency but with a worse leading constant.\n* In line 214, Lemma 5 is essentially the result established in [23, Section 6.1], obtained by combining their Lemmas 14 and 15.", " In this work the authors study bounds on the sequential minimax regret of sequential probability assignments against a set of expert predictors. They provide both lower- and upper-bounds on the adversarial regret that either match or improve on previous bounds. For the upper-bounds, their main technique relies on the idea of \"sequential $\\alpha$-covering\" of an experts' class, which builds upon a similar idea from previous work, and using these covers in the classical Bayesian predictor algorithm. They further sharpen such bounds when the experts' class is parameterized and Lipschitz over these parameters, and also give sequential coverings for experts' classes with low fat-shattering number. Finally, they also provide interesting lower-bounds on the adversarial regret, which they lower-bound by lower-bounding the maximal minimax regret. ### Strengths\n- Although I am not very acquainted with the latest results in regret analysis of sequential probability assignment, this paper seems to have strong theoretical results: better bounds on the minimax sequential regret obtained by interesting ideas combining global sequential coverings (which expand on a definition from previous work) with a Bayesian update rule;\n- They also prove lower-bounds that match many of their rates, but I can't say much about the techniques since they do not discuss them at length in the main body of the paper (and I didn't have the time to look into the appendix);\n\n### Weaknesses\n- Although I'm making a list, all weaknesses lie under the umbrella of **presentation**. I had a hard time going through this paper due to a collection of reasons (some of which I'll list). In general, the discussion of related work and of the techniques used is sub-optimal and very hard to go through if one doesn't have the other papers fresh in one's mind. A clear example is that only after reading the first couple of pages of [3] (Bilodeau, Foster, Roy 2020) and skimming their results I could digest the paper properly;\n- About high-level presentation problems, nowhere in the main body of the paper do the authors discuss how they use Shtarkov sums to give lower-bounds. They go through the trouble of even defining it, but only use it in Theorem 1 to prove an upper-bound. 
Since I do not have time to look through the appendix I'm left wondering why Shtarkov sums are helpful in the lower-bound proofs.\n- For an example of parts where I had trouble reading, I'll walk through my experience in the introduction. The first paragraph of the introduction introduces the problem in a very general way (the player predicts \"labels\" instead of probabilities). The second paragraph makes the definition precise and general, and right after already specializes to probability prediction, so it makes one wonder if the generality of labels was even necessary (even more so in the introduction!). Yet, until this point I was still following. When the authors introduce fixed and sequential design, I had to spend quite some time parsing the notation and understanding the differences since the text does not seem to try to guide our understanding. At this point I had to spend more effort than usual just to understand *the introduction* of the paper. But then the \"regrets in information theory\" section was very dense and even after reading the paper I don't think it helps me place the contribution of the authors in the literature nor does it help me understand the motivation/\"hardness\" of bounding the regret. In fact, the related work subsection that comes later does a better job (but I'll comment on this later), and I don't know what is the purpose of the section on regret in info theory. Yet, by the end of the introduction I was not sure if I had understood the problem very well, and I definitely could not quite put my finger on what were the bounds that the community had worked on before. That is when I went through [3], whose introduction and defs I could read without much effort, and it helped me understand the motivation and setting much better;\n- When comparing your results and techniques with other works, the authors seem to write as if we had these other works fresh in our minds. Even though I do not expect you to explain all the previous work in your paper, more formally stating their bounds or techniques so that I do not need to open their papers to understand what you mean in some passages of your paper would be very helpful. A couple of examples: (1) Before Thm 1 you mention something about trees in [23], but without opening the paper a reader cannot even understand why it is a problem or what it means. After Thm 1 you also say that you obtain better constants if compared to [3], but without opening their paper I do not know how much better these constants are and whether they analyze a more general case. At least stating the constants [3] gets should not take much space; (2) In example 1 it is not immediately clear what constants are being discussed (although one can infer which constants after reading the rest of the paper) and the authors also mention logarithmic dependency on $L$, but they never mention what the dependency on $L$ was in previous work (was it better? 
worse?)\n\n\nSome briefer comments on presentation:\n- The discussion between lines 167 and 170 should be a proposition, and the authors could be more explicit on what \"universally\" means;\n- The definition of Big-Oh notation in lines 120 and 121 is a bit wrong (it is not for all $t \\geq 0$, only for sufficiently big $t$);\n- \"function is self-concordance\" -> \"function is self-concordant\"\n- Lines 205-210 are extremely hard to parse (and formatted weirdly);\n- Line 213: cardinally -> cardinality\n- In line 214 you cite [23] but it only confused me: is it that the next lemma is due to them, or do they do something related?\n\n### Summary\nI think this is a technically strong paper which is very hard to parse for readers not well-acquainted with the latest work on this topic. I'm favorable towards acceptance. For now I only have my score in \"weak accept\" because I have a hard time assessing impact (even after skimming a bit of related work), but I might increase my assessment during the rebuttal phase.\n\n\n### Post-rebuttal\nI increased my score from 6 to 7 based on the other reviews and the authors' responses. This is definitely a strong paper and my initial problems with presentation will be addressed as much as possible. Unfortunately I do not have interesting technical questions for the authors. However, I'd like to know whether and how the authors plan to improve the presentation of the paper based on what I've discussed. Also, if you could give some intuition on how Shtarkov sums are used to prove lower-bounds, I'd be very interested to hear. However, I know the authors will be time-constrained during the rebuttal and I don't think my review will be a problem for the acceptance of the paper, so feel free to not give a thorough rebuttal to my review. Although the discussion of related work is hard to parse and sometimes limited, the authors do discuss some limitations of their results and future directions of research.", " The paper presents new tight lower and upper bounds for online regression with the log loss, with new and simple proofs. For the upper bounds, the authors prove their result using a new and simple truncated Bayesian algorithm. The analysis of this algorithm involves a new complexity measure---the Global sequential covering number---which the authors show is upper bounded by the sequential fat-shattering number. For the lower bound, the authors use the fact that the sequential minimax regret is lower bounded by the regret in the fixed design setting (known covariates), which they further lower bound using concepts from information theory such as the Shtarkov sum. The paper presents improved upper and lower bounds for the problem of online regression with the log-loss. The approaches used to achieve these results are new and simple, which constitutes a solid contribution. How do your results compare to e.g. \"An improper estimator with optimal excess risk in misspecified density estimation and logistic regression\"? The presentation can be improved in places. Here I give a couple of suggestions (some minor):\n- Line 141: [4, Chapter 3.3] are you referring to Eq 4?\n- Lemma 4 should refer to the specific truncation mechanism in Algorithm 2 (not just there exists). This is required in the proof of Theorem 1.\n- To be consistent with the notation I would use $(${$0,1$}${}^d)^*$ instead of {$0,1$}${}^d_*$. 
\n- The fact that the global sequential covering number can be bounded by the fat-shattering number should be mentioned earlier in the presentation (perhaps around the definition of the global sequential covering number).", " The authors consider the problem of sequential prediction of binary data scored with log loss against potentially large classes of reference predictors. Using the method of [RST10] to sequentially construct covers, they algorithmically recover minimax rates that previously were only achieved non-constructively. They also improve constants of (4,4) to (2,1), which they prove are sharp for some classes that have uniform (non-sequential) covers. ## Review Summary\n\nThe main contribution of the work, which is algorithmically achieving the best-known minimax rates for this problem, is a significant contribution to the literature. I have minor comments regarding (a) the authors' framing of novelty, as all of the algorithm components exist in the literature already (this is not a bad thing, but warrants stronger citations); and (b) the authors' claims of generically tight constants, since it is not clear that the results are even tight in terms of log factors for classes that cannot be uniformly bounded over the covariate space (such as the high-dimensional linear functions example, in contrast with Lipschitz functions on bounded sets). I am confident these concerns can be addressed in the revision period.\n\n## Concerns to be addressed\n\nMy first concern is regarding the claims of novelty in this work. The notion of global sequential covering, while concisely defined in Lemma 1, is not novel in my opinion. As the authors cite (unfortunately, only in parentheses and on page 9), this notion was exactly considered by [RST10] in their Section 6.1, who also already proved Lemma 5 (their Lemma 15), albeit with slightly worse constants inside (eventual) log factors. Similarly, the distinction between this notion of sequential covering and the one used for nonconstructive minimax rates (by, e.g., [BFR20]) is only consequential for the algorithmic aspects (which, as noted, are significant; indeed, the main contribution of this work is to provide an “implementable” algorithm). This means that Eq. (6) (up to an improvement of constants 4,4 to 2,1) is exactly the same as the main result of [BFR20], since we currently only know how to use this regret bound by further upper bounding it by either the uniform metric entropy (for classes that can be covered without knowledge of covariates) or by the fat-shattering number (for classes that cannot), and both notions of entropy are bounded in the exact same way.\n\nThe idea of Bayesian averaging over a cover with smoothing has also already been used in the literature to obtain tight rates for log loss regret [see BFR21], and is implicit at least as far back as [YB99] (see the discussion after their Lemma 2); notably, these works are only in the i.i.d. setting, although the smoothing is purely used in the present work to deal with the unboundedness of log loss (the adversarial sequential aspect is handled by the existing cover notion). The truly novel result in the present work is Lemma 4, which allowed the authors to apply these existing notions to the present problem specifically for binary responses. I believe the rhetoric and citations can be updated to reflect this history.\n\nMy second concern is regarding the claims of general optimality for the constants. The authors have multiple results for classes for which they prove sharp leading constants. 
However, for each of these, the class is already uniformly bounded in sup norm over the covariate space, and hence standard notions of covering could be used to control the size, with no need to consider sequential covers (of either kind). For classes that actually require the sequential covering notion, such as the linear classes that require an empirical covering (i.e., it depends on the observed covariates), it is unclear whether even the log factors are tight, let alone the constants. Further, as shown by [BFR20, Corollary 4] (and repeated after Theorem 6 in the present work), sequential fat-shattering (which both [BFR20] and the present work have to use to control their respective covering notions) is insufficient to fully characterize minimax regret. In summary, I think the claim that “our bounds are optimal on the constants” after line 167 is unsubstantiated.\n\n[RST10] - Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Random averages, combinatorial parameters, and learnability. Advances in Neural Information Processing Systems, 23, 2010.\n\n[BFR20] - Blair Bilodeau, Dylan Foster, and Daniel Roy. Tight bounds on minimax regret under logarithmic loss via self-concordance. In International Conference on Machine Learning, pages 919–929. PMLR, 2020.\n\n[BFR21] – Bilodeau, Foster, and Roy (2021). Minimax Rates for Conditional Density Estimation via Empirical Entropy, arXiv:2109.10461\n\n[YB99] – Yang and Barron (1999). Information-theoretic determination of minimax rates of convergence, Annals of Statistics.\n\n## Typos\n\n- The definition of regret after line 32 is slightly inconsistent with the usage that follows it, and should be better defined as $R(\\phi^T, y^T, \\mathcal{H} | x^T)$.\n- line 139, “reference class map” is undefined, although reasonably clear; I hope the authors can just be a bit more precise with the wording\n- Set notation for sets of functions is somewhat inconsistent, eg write $\\mathcal{G} \\subseteq [0,1]^{\\mathcal{X}^*}$ in Definition 1 to match $\\mathcal{H}$\n\n\n\n I raised all my questions in the body of the review. I see no negative societal impacts of this theoretical work." ]
[ -1, -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, 2, 3, 5 ]
[ "kLdUOMffq00", "mKHWKuYYYrg", "bK_t8ccFlOR", "2XkS45eWoNn", "5VCm_vJd7m", "nips_2022_9s3CbJh4vRP", "nips_2022_9s3CbJh4vRP", "nips_2022_9s3CbJh4vRP" ]
nips_2022_UVF3yybAjF
Robust Testing in High-Dimensional Sparse Models
We consider the problem of robustly testing the norm of a high-dimensional sparse signal vector under two different observation models. In the first model, we are given $n$ i.i.d. samples from the distribution $\mathcal{N}\left(\theta,I_d\right)$ (with unknown $\theta$), of which a small fraction has been arbitrarily corrupted. Under the promise that $\|\theta\|_0\le s$, we want to correctly distinguish whether $\|\theta\|_2=0$ or $\|\theta\|_2>\gamma$, for some input parameter $\gamma>0$. We show that any algorithm for this task requires $n=\Omega\left(s\log\frac{ed}{s}\right)$ samples, which is tight up to logarithmic factors. We also extend our results to other common notions of sparsity, namely, $\|\theta\|_q\le s$ for any $0 < q < 2$. In the second observation model that we consider, the data is generated according to a sparse linear regression model, where the covariates are i.i.d. Gaussian and the regression coefficient (signal) is known to be $s$-sparse. Here too we assume that an $\epsilon$-fraction of the data is arbitrarily corrupted. We show that any algorithm that reliably tests the norm of the regression coefficient requires at least $n=\Omega\left(\min(s\log d,{1}/{\gamma^4})\right)$ samples. Our results show that the complexity of testing in these two settings significantly increases under robustness constraints. This is in line with the recent observations made in robust mean testing and robust covariance testing.
Accept
The reviewers and I agree that this result is a solid, if not groundbreaking, result in the theory of robust statistics. It considers a very natural testing problem, and when the fraction of corrupted samples is constant, completely settles the statistical complexity of the problem. There are some concerns that its immediate practical impact is limited, but overall, the consensus is that the paper represents a solid technical contribution, and will be of interest to the robust statistics community, and the learning theory community at large. Therefore, we believe this paper is above the bar for acceptance at NeurIPS.
val
[ "Fif6n9dyjL", "ON50mzBeEmbu", "yUDyie5CpUI", "QbS53lfqBrG", "WKa9WPPp-bTu", "2QtiQK4tY9B", "ozLDMBNOq_G", "JydQ_ggJr3I", "HYlynQ6B8vj", "UIZLfgwUH8R" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'm happy with the author response and will maintain my score.", " Your response addresses most of my questions and helps my assessment of the significance of your results. \nDue to the overlap in the techniques required to prove the current lower bounds with lower bounds in the non-sparse setting, I will maintain my score.", " We thank the reviewer for their time and thoughtful comments. We address their specific questions below. \n\n**Response to Questions**\n\n1. We thank the reviewer for bringing up this important point. For the lower bounds in Theorems 4 and 5, the dependence our proof yields with respect to $\\varepsilon$ and $\\gamma$ is $\\Omega(\\varepsilon^4/\\gamma^4)$. Importantly, the exact dependence on these parameters is still an open problem even for “dense” robust Gaussian mean testing (non-sparse). We conjecture that the actual dependence on these parameters is $\\varepsilon^2/\\gamma^4$, and hence we believe that our dependence on these parameters is not tight (since we have $\\varepsilon = o(\\gamma)$). However, this is not just an artefact of our analysis: our current techniques (as well as the ones from previous work on the \"dense\" robust testing case) cannot yield better than an $\\Omega(\\varepsilon ^4/\\gamma^4)$ dependence (we refer the reviewer to the response to reviewer PDuZ for additional details). We will elaborate on these points in the final version of the paper. \n\n2. Thank you for this query. We agree that handling the case of arbitrary covariances is a natural and important question; however, we believe that deriving the sample complexity in the case of non-identity (and unknown) covariance matrices would require significant additional effort, and new techniques and ideas, and thus we left it for future work. \n\n It is worth pointing out that we are not aware of any work addressing the sample complexity results for non-identity covariances even in the non-robust (but sparse) setting. The closest which comes to mind is the problem of robustly testing the covariance of a Gaussian, as discussed in [DK21]. We will try to include a discussion regarding this in the final version. \n\n3. We believe that the presence of a phase transition in the non-robust setting is due to the dominance of two different behaviors based on the value of the sparsity parameter $s$. On the one hand, for an $s$-sparse signal, if we knew the non-zero coordinates a priori, then the testing problem is no different from the testing of an $s$-dimensional signal. So, the sample complexity would be $O(\\sqrt{s})$. However, the number of samples required to detect the non-zero coordinates scales as $\\tilde{O}(s)$ (where $\\tilde{O}$ hides polylogarithmic factors in the argument). This is because the magnitude of each coordinate scales (in the worst case) as $O(\\gamma/\\sqrt{s})$ and therefore requires $O(s)$ samples to achieve the same Signal to Noise Ratio after averaging. So this strategy leads to an $\\tilde{O}(\\sqrt{s}+s) = \\tilde{O}(s)$ sample complexity. On the other hand, there is always the option of ignoring the sparsity altogether and performing the testing of the $d$-dimensional mean, which in that case has sample complexity $O(\\sqrt{d})$. Combining the two leads to a non-robust mean testing sample complexity of $\\tilde{O}(\\min(s,\\sqrt{d}))$, leading to the two regimes, with a transition at $s=\\sqrt{d}$. As it turns out, this simple two-pronged approach is actually optimal in the non-robust case. 
\n\n However, the phase transition vanishes in the robust case because we can always match the first moment of the input distribution (a Gaussian with non-zero mean vector) to that of a zero-mean Gaussian by choosing a suitable adversarial distribution. Hence we will have to rely on higher moments for testing, which increases the sample complexity: there is no longer an $O(\\sqrt{d})$ upper bound option, as there was in the non-robust case. \n\n**Response to Suggestions**\n\n1. Thank you for this suggestion. The algorithm has exponential time complexity in $s$ and polynomial in $d$ and $\\gamma$. We will add this clarification in the final version.\n\n2. Thank you for pointing out this typo. We will correct it in the final paper.", " We thank the reviewer for their time and feedback. We also thank the reviewer for going through the proofs in the paper. We address the specific question below. \n\n**Response to Questions**\n\n> Is it entirely accurate to say that the testing is ``much harder than its non-robust counterpart'', given that the sample complexity agree in the very sparse regime $s<\\sqrt{d}$ ?\n\nWe thank the reviewer for raising this important aspect. Indeed, testing becomes much harder only in the dense regime ($s>\\sqrt{d}$). In the sparse regime ($s<\\sqrt{d}$), the difference in sample complexities between the non-robust and robust cases is at most logarithmic. We will clarify the quoted statement in the final version of the paper. \n", " We thank the reviewer for their time and thoughtful comments. We address the specific questions below. \n\n**Response to Questions**\n\n> Do the lower bound techniques get the right $\\gamma$ scaling as well? If not, where are the bottlenecks in the argument that preclude this?\n\nWe thank the reviewer for bringing up this important point. For the lower bounds in Theorems 4 and 5, the dependence on $\\varepsilon$ and $\\gamma$ is $\\Omega(\\varepsilon ^4/\\gamma^4)$. The exact dependence on these parameters is still an open problem even for robust Gaussian mean testing (non-sparse). We conjecture that the actual dependence on these parameters should scale as $\\varepsilon ^2/\\gamma^4$, and hence we believe that our dependence on these parameters is not tight (since we have $\\varepsilon = o(\\gamma)$). In the current proof, the bottleneck appears while obtaining an upper bound for the chi-squared correlation between two Huber-contaminated distributions, but we suspect that more advanced techniques (like Moment Matching, for higher moments) might be required to get a tight dependence on $\\gamma$ and $\\varepsilon$. We will include these points as a remark in the final version of the paper.\n\n> Is Brennan-Bresler's lower bound for robust SLR also a lower bound against robust SLR testing?\n\nWe believe that this is the case in the sparse regime ($s=o(\\sqrt{d})$). However, we have not gone through their arguments rigorously, and will confirm this before finalizing the paper. \n\n> Minor typos\n\nWe are grateful to the reviewer for bringing these typos to our attention. We will correct these in the final version of the paper.\n", " We thank the reviewer for their time and comments. We address the specific questions below, and hope that the reviewer will reconsider their score in light of our answers. \n\n**Responses to Questions**\n\n1-4: We thank the reviewer for catching these errors. In the paper, $\\mathbf{p}_0$ refers to the reference distribution (simple null hypothesis, in our case the standard Gaussian). 
$\\tilde{\\mathcal{D}}$ is a typo--it should have been $\\tilde{\\mathcal{D}}_s$. We will correct these in the final version. \n\n5: We thank the reviewer for this question. We acknowledge that this would be a very valuable state-of-knowledge (SoK) or survey paper. However, considering the fact that the focus of our paper is on deriving lower bounds, we think that these experiments would help little in illustrating the results. Indeed, our lower bounds are achieved by already known algorithms for the corresponding learning problems (hence the message that “robust testing is as hard as learning”), and we believe that implementing the algorithms from these previous papers to show how they perform empirically on this different learning task would distract from our own results and focus.\n\n6: Thank you for this suggestion. We will try our best to include more interpretation and visualization of our results in the final version (specifically, as a figure/phase diagram showing the various regimes of the sample complexity), subject to space constraints.\n\n7: Indeed: the agnostic hypothesis selection via tournaments algorithm in [Li17] and Theorem 2 of our paper achieve the sample complexity of our lower bound for robust sparse Gaussian mean testing. The same algorithm can be used for robust testing in the sparse linear regression model in view of the results in [LSLC20, Theorem 2.1], which states that any algorithm for robust sparse mean estimation can be used for robust sparse linear regression with a polylog($1/\\gamma$) increase in the sample complexity. However, these algorithms are computationally inefficient. The best-known computationally efficient algorithm takes $O(s^2)$ samples [BDLS17], and this is conjectured to be inherent to these problems (we discuss these works, and the corresponding evidence backing up this conjecture, in Section 1.3).\n\n [LSLC20] Liu Liu, Yanyao Shen, Tianyang Li, and Constantine Caramanis. High dimensional robust sparse regression. PMLR 108:411-421, 2020. \n\n**Response to Limitations**:\n\n1: This is a great point: we will include a discussion of limitations and future work in the final version, and outline some of them now. One of the main limitations of our work is that the sample complexity is not tight in parameters $\\varepsilon$ and $\\gamma$ (see responses to reviewers jAix and PDuZ for a discussion of this dependence). Importantly, to the best of our knowledge this dependence is unknown even in the non-sparse case. Hence, this is also one of the main future directions to explore; we believe that new techniques will be required to address it. \n Another line to pursue further would be to find the sample complexities of these problems when the covariance of the Gaussian is non-identity (see response to reviewer jAix). We are not aware of sample complexity results even in the non-robust (but sparse) setting where the covariance matrix is non-identity and unknown.\n\n2: We thank the reviewer for pointing this out, and elaborate on some of the motivation below. Testing the norm of the mean vector is of fundamental interest to the signal processing and statistics communities, where it falls under the general name of Gaussian Location Model (GLM), and has been the subject of a long line of works. Furthermore, sparse signals, as a model and class of distributions, play a significant role in many applications – whenever applicable, they significantly reduce the sample and computational complexity. 
A few of the most relevant or initial papers in that respect include [Ing97], [DJ04]; see also, e.g., [ITV10] for the sparse regression question, or [CCC+19]. The setting in our paper corresponds to robustly detecting the presence of a high-dimensional sparse signal with high confidence, thus combining the practical motivations of robust statistics (data is routinely noisy, and modeling assumptions never exactly hold) with the motivations for sparse testing and sparse regression. We will detail these points further and add the corresponding citations in the final version of this paper. \n\n [Ing97] Ingster, Y.I. (1997). Some problems of hypothesis testing leading to infinitely divisible distributions. Math. Methods Statist. 6 47–49. \n [DJ04] Donoho, D.L. and Jin, J. (2004). Higher criticism for detecting sparse heterogeneous mixtures. Ann. Statist. 32 962–994. \n\n3-4: We refer the reviewer to our answers to questions 5 and 7 for these two comments. We would be happy to elaborate further on any of the points raised above; we hope our answers address the reviewer’s concerns, and will lead them to re-evaluate their score. \n", " The paper focuses on two problems within the domain of robust statistics:\n1. Sparse Gaussian mean testing: the goal is to robustly test whether the mean of a Gaussian is zero or far from zero when we are guaranteed that the mean vector $\\theta$ satisfies some form of sparsity codified as $\\|\\theta\\|_q = s$ for some $q \\in [0,2)$.\n2. Testing in the sparse linear regression model: given $x \\sim \\mathcal{N}(0,I_d)$ and a sparse $\\theta$ we have $y = x^\\top \\theta + \\epsilon_i$ and our goal is to test whether $\\|\\theta\\|_2 = 0$ or $\\|\\theta\\|_2 \\ge \\gamma$.\nIn both problems we have that an $\\epsilon$ fraction of samples are arbitrarily corrupted by an adversary.\n\nThe main results of the paper are information theoretic lower bounds for the above tasks which depend on $s,d$ (the authors focus on the regime where $\\gamma,\\epsilon$ are constants and the dependence of the bounds on these quantities is not focused upon).\nThe non-robust versions of these questions show a dependence of the following form (when $q=0$):\n1. $\\Theta\\left(s\\log(1+d/s^2)\\right)$ if $s < \\sqrt{d}$,\n2. $\\Theta\\left(\\sqrt{d} \\right)$ if $s \\ge \\sqrt{d}$.\nIn particular, the testing version of the problem can always be solved more efficiently than the learning/estimation version. However, in the robust variant of these problems the results of this paper show that this is no longer the case. Irrespective of $s$, the robust sparse Gaussian mean testing problem requires $\\Omega(s\\log(d/s))$ samples.\n\nIn addition to the $q=0$ setting, the authors also provide upper and lower bounds for these problems in the $q \\in (0,2)$ settings.\nThe main message of the story remains quite similar in the sparse linear regression testing problem as well.\nAmong the techniques used by the authors are Le Cam's well-known two-point method, some inequalities connecting different distance metrics to each other, and some inequalities which help bound the $\\chi^2$ distance of a product of a mixture of distributions with another product distribution. **Strengths:**\n1. Sparsity and robustness are two important sub-fields within Statistics research today which the paper focuses on.\n2. The paper's contributions are original. In addition, the paper is well written and clear to read.\n3. 
The authors provide a thorough and complete theoretical analysis of the problem leading to a tight understanding of the sample complexity along with an extensive survey of all related work.\n\n\n**Weaknesses:**\n1. Even within the scope of robustly testing a sparse Gaussian the paper focuses on a specific niche problem of mean testing. It would be nice to see some discussion around the setting of non-identity covariance matrices and testing between two Gaussians with different covariances.\n2. The paper presents a nice overview of the techniques used and has a number of results. But it is not fully clear where the main technical challenges arise in this setting. In particular, it is hard for me to understand why the problems in the robust setting do not exhibit a phase transitionary behavior in the sample complexity. Given this, I am a little unsure of the significance of the results of the current paper. **Questions:**\n1. What are the dependences of the bounds on $\\epsilon,\\gamma$ in Theorems 4 and 5? How tight are these?\n2. It would be nice to see some discussion around the setting of non-identity covariance matrices and testing between two Gaussians with different covariances.\n3. The sparse testing problem exhibits a phase transition in its sample complexity as a function of sparsity. Namely, we only see benefits of sparsity until the sparsity parameter reaches $\\sqrt{d}$. It is intriguing that a similar phase transitionary behavior doesn't occur here and the situation seems to linearly degrade with $s$ for all ranges of $s$. Is there any high-level intuition on what causes the phase transitionary behavior in the non-robust setting and why this is no longer the case in the robust setting?\n\n**Suggestions:**\n1. \"Upper bounds of [Li17] and Theorem 2 are computationally inefficient...” – present the computational complexity in terms of $d, s, \\gamma$ here.\n2. Line 138 of supplementary typo: “note that for in order to” The authors adequately addressed the potential negative societal impact of their work. They also adequately address the limitations of their results.", " The paper studies the sample complexity of robust sparse Gaussian mean testing and linear regression. Motivated by prior work saying that the number of samples required for testing versions of various inference tasks becomes as high as that required for the learning version of the problems (i.e., the sample complexity experiences an information-theoretic gap when corruptions are introduced), the paper asks whether the same is true in the sparse setting. For the robust $s$-sparse Gaussian mean testing problem, it shows a lower bound of $\\Omega(s \\log(d/s))$ for the sample complexity, which means that, in contrast to the non-robust setting, the sample complexity does not default to $\\sqrt{d}$ when $s \\geq \\sqrt{d}$. The result extends to a more general notion of sparsity, for which the paper also obtains upper bounds on the sample complexity. Finally, they show qualitatively similar results for the sparse linear regression setting (in the usual notion of sparsity). The approach for the mean testing result involves defining a family of $\\epsilon$-corrupted Gaussians (in Huber’s contamination model) with large mean and considering the binary hypothesis testing problem that asks for distinguishing this family from the standard Gaussian (i.e., zero-mean). 
The proof is based on Le Cam's method and utilizes a lemma of [DKS17] to bound the chi-square distance between the standard Gaussian and the uniform mixture over the aforementioned family. The linear-regression result has to take care of the additional complication regarding some low-probability events that cause the chi-square divergence to blow up. Significance: Testing means of Gaussians and linear regressors are one of the most fundamental problems in statistics and this work completes the characterization of the sample complexity for the robust version.\n\nQuality: The arguments seem to be overall correct. To the extent that I checked the proofs in the supplementary material, I do not have issues regarding soundness.\n\nClarity: The paper is well-written. The sketch in Section 1 is clear in conveying the high-level approach. \n\nOriginality: The techniques for proving the lower bounds are based on Le Cam’s method and [DKS17]. The definition of the hard distribution class is somewhat natural, and the approach does not need to significantly deviate from the prior established technology. The linear regression result however requires more substantial work at the technical level. The upper bound in Theorem 7 follows by modification of standard arguments.\n\nThe paper fills the gap in our understanding regarding the sample complexity for robust sparse testing. Although the problems do not require substantial originality in terms of new techniques, the solution is non-trivial and the results are of interest to the robust statistics community. \n Is it entirely accurate to say that the testing is ``much harder than its non-robust counterpart'', given that the sample complexity agree in the very sparse regime $s < \\sqrt{d}$ ? .", " The submission considers the well-known question of testing whether the mean of a $d$-dimensional identity-covariance Gaussian is zero or a sparse vector $\\gamma$-far from zero, and the related question of testing whether the coefficient vector in an instance of sparse linear regression is zero or far from zero. \n\nIn this review, we will focus on the usual $L_0$ notion of sparsity, though their results for mean testing also extend to any $L_q$-based sparsity for $0 < q < 2$. For mean testing, classically it is known that the optimal sample complexity, up to log factors, scales as $\\min(s,\\sqrt{d})/\\gamma^2$ where $s$ is the sparsity of the alternative hypothesis, and a qualitatively similar picture holds for SLR testing. In particular, there is a phase transition at $s = \\sqrt{d}$ such that above this, testing is easier than estimation and, more specifically, the sample complexity does not grow with $s$ and at worst scales as $\\sqrt{d}/\\gamma^2$.\n\nThe present work considers a twist on this setup where some small constant fraction $\\epsilon$ of the samples that the tester sees have been adversarially corrupted. They show that in this model, the abovementioned phase transition does not occur, and testing is no easier than estimation regardless of $s$. That is, they show that for any constant $\\epsilon,\\gamma$, the sample complexity for *robust* Gaussian mean testing scales linearly in $s$, and similarly for robust SLR testing. Note that this result was already known in the case of $s = d$ [DKS17].\n\nThe proofs are via the Ingster-Suslina method, which amounts to designing a mixture over alternatives and controlling the tails of the pairwise correlation between the distributions over samples under two random alternatives relative to the null hypothesis. 
In the robust setting, there is the additional freedom of choosing how the adversary corrupts the data: for mean testing, e.g., they consider an adversary (the same as in [DKS17]) that, for every sample, with probability $\\epsilon$ replaces the sample with a draw from another Gaussian such that the mixture of the \"clean Gaussian\" and this Gaussian has zero mean. *Strengths*:\n- It is nice that this paper gives an essentially complete answer to the question they set out to solve, up to log factors and dependence on $\\epsilon,\\gamma$. \n- While the techniques employed are all fairly standard, they are integrated in a clean and modular fashion. \n- Conceptual novelty: while there has been a lot of work on the estimation version of the problems this paper studies, as far as I know this is the first to propose studying robust SLR testing, even for $s = d$.\n\n*Weaknesses*:\n- For the mean testing result, a lot of the technical heavy lifting, namely the pairwise correlation calculations (Lemma 2) and the choice of adversary, comes directly from [DKS17]. Similarly, the upper bound for $L_q$-sparse mean testing is a minor modification of the proof in [Li17].\n\n*Assessment*: I would recommend acceptance because of the comprehensive nature of the result: while dense mean testing was already understood, it is quite cool that they managed to demonstrate that the phase transition for non-robust mean testing disappears altogether. *Questions*:\n- Do the lower bound techniques get the right $\\gamma$ scaling as well? If not, where are the bottlenecks in the argument that preclude this?\n- Is Brennan-Bresler's lower bound for robust SLR also a lower bound against robust SLR testing?\n\n*Minor typos*:\n- P. 3 Line 98: no longer present*s*\n- P. 4 Line 124: \"for some $\\delta$\" -> \"for any $\\delta$\"?\n- P. 4 Line 135: semicolon should be comma, similarly on P. 5 Line 185\n- P. 4 Line 138: we note that in order\n- P. 5 Line 186: *a* suitable event\n The authors have adequately addressed the limitations, and I don't see any potential negative societal impact from this work.", " This paper is interested in determining whether the norm of the mean (supposed to be sparse) of a Gaussian distribution is zero or not based on samples drawn from this distribution. The difficulty of the setting considered comes from the addition of a robustness condition which imposes that a fraction of the data available for the test is corrupted. The authors are interested in determining the minimum number of samples (sample complexity) needed to perform such a task. The authors hope through the study to show the impact of robustness on pre-existing results on the sample complexity of testing the norm of sparse Gaussian vectors. In particular, they show that the robustness constraints significantly increase the complexity of the test. It should be noted that an additional study to test the norm of a regression vector in a sparse linear regression context is also proposed and theoretically analyzed with the same consequences. *Strengths*\n\n1- The paper seems technically sound with original techniques to derive the sample complexity.\n\n2- The sample complexity derived makes some sense and provides some intuition into statistical tests under robustness conditions\n\n*Weaknesses*\n\n1- The work needs to be better motivated by giving concrete examples of the usefulness of testing the norm of a high-dimensional sparse Gaussian vector.\n\n2- Some experiments would have been welcome. 
Do some robust statistical tests exist to show empirically what their sample complexities are and how they relate to the derived theoretical sample complexity?\n\n3- The structure of the paper could be improved by providing more openings and conclusions to the work, and by insisting more on the insights drawn than on the technical proofs, which could be deferred to the supplementary materials. Furthermore, a notation section would be welcome to introduce some notation conventions.\n 1- In line 43, $p_{0}^n$ seems not to be defined\n\n2- Line 108 \"we note that for in\"\n\n3- Line 169 $\\tilde{\\mathcal{D}}$ seems not defined\n\n4- The single MGF seems not defined \n\n5- It would be interesting to have a bunch of experiments to illustrate the theory. In particular, could the authors provide a set of baseline algorithms and their sample complexities, and show how the proposed sample complexity behaves with respect to these empirical sample complexities?\n\n6- The presentation can be improved by summarizing the main ingredients of the proofs more briefly and focusing more on interpretation and visualization of the main theoretical results to increase readability and make the paper more understandable.\n\n7- Do there exist robust algorithms that could reach the proposed sample complexity? 1- The paper didn't really discuss the limitations or openings of the work.\n\n2- The motivation of such a study, notably for applications, is lacking to some extent.\n\n3- Experiments are not provided and could be an important asset to support the theory.\n\n4- Some algorithms that reach the sample complexity would be more than welcome." ]
[ -1, -1, -1, -1, -1, -1, 6, 7, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3, 2 ]
[ "WKa9WPPp-bTu", "yUDyie5CpUI", "ozLDMBNOq_G", "JydQ_ggJr3I", "HYlynQ6B8vj", "UIZLfgwUH8R", "nips_2022_UVF3yybAjF", "nips_2022_UVF3yybAjF", "nips_2022_UVF3yybAjF", "nips_2022_UVF3yybAjF" ]
nips_2022_VrJWseIN98
VER: Scaling On-Policy RL Leads to the Emergence of Navigation in Embodied Rearrangement
We present Variable Experience Rollout (VER), a technique for efficiently scaling batched on-policy reinforcement learning in heterogenous environments (where different environments take vastly different times to generate rollouts) to many GPUs residing on, potentially, many machines. VER combines the strengths of and blurs the line between synchronous and asynchronous on-policy RL methods (SyncOnRL and AsyncOnRL, respectively). Specifically, it learns from on-policy experience (like SyncOnRL) and has no synchronization points (like AsyncOnRL) enabling high throughput. We find that VER leads to significant and consistent speed-ups across a broad range of embodied navigation and mobile manipulation tasks in photorealistic 3D simulation environments. Specifically, for PointGoal navigation and ObjectGoal navigation in Habitat 1.0, VER is 60-100% faster (1.6-2x speedup) than DD-PPO, the current state of art for distributed SyncOnRL, with similar sample efficiency. For mobile manipulation tasks (open fridge/cabinet, pick/place objects) in Habitat 2.0 VER is 150% faster (2.5x speedup) on 1 GPU and 170% faster (2.7x speedup) on 8 GPUs than DD-PPO. Compared to SampleFactory (the current state-of-the-art AsyncOnRL), VER matches its speed on 1 GPU, and is 70% faster (1.7x speedup) on 8 GPUs with better sample efficiency. We leverage these speed-ups to train chained skills for GeometricGoal rearrangement tasks in the Home Assistant Benchmark (HAB). We find a surprising emergence of navigation in skills that do not ostensible require any navigation. Specifically, the Pick skill involves a robot picking an object from a table. During training the robot was always spawned close to the table and never needed to navigate. However, we find that if base movement is part of the action space, the robot learns to navigate then pick an object in new environments with 50% success, demonstrating surprisingly high out-of-distribution generalization.
Accept
The paper proposes a novel method that takes the best of both worlds: synchronous and asynchronous on-policy RL methods. The rebuttal nicely addressed the concerns of most reviewers. Why the method makes sense and has benefits is rather straightforward and intuitive (which is a good thing!). The paper is clearly an experimental paper / systems paper with very extensive evaluations that show significant practical benefits. The method potentially has large practical impact, which is an important contribution to the community (and a valid - though less common - NeurIPS paper format). Hence I disagree with reviewer 45yB about requiring a theoretical novelty/contribution. *** not visible to authors as posted after discussion phase *** Agree to raise the score NeurIPS 2022 Conference Paper1458 Reviewer ep2X 18 Aug 2022 Excellent work, impressive results! I will raise my score.
test
[ "dhZLgbShIr", "K8OrmRbR8bh", "TuxPvkPGFWo", "wYNN90p9omZ", "-yCz3vjZQd", "UITutjrQ7o0", "FV1RBE5_Lw", "hsfK2Wcg9Y-", "3RMUAvuqovIK", "Qa1f7TqmMuj", "PymNiM_Kcv9", "QmZIHqh7fcq", "uI69wY6zO5w", "OFJWa5xNZUY", "gfaSuQaGLgc", "jGjAPVli4w", "zaQ5ZEB7hQL", "AScXMIxnP_-" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their response, I will be maintaining my score in favor of acceptance. ", " We hope that we have addressed your concerns. Are you satisfied with our response or do you have additional questions?", " Thank you for your suggestions, we agree that these plots have increased the clarity of our paper.", " Thanks for the reponse. \nThe perfomance gains of VER are much more clear now!\n", " We are glad the misunderstanding around speeds-up has been resolved. Thank you for updating your views and the score! We answer the new questions asked below (which is again a misunderstanding around what is already present in the paper)\n\n> Now it's clear that Table 2 (you wrongly typed Table 3 in your response) includes the main empirical results supporting the speed-up claims. \n\nSorry for the confusion. We used the review version pdf for line numbers, table numbers, etc in our response to indicate that this material was present in the document before the rebuttal. Starting from this response, we will switch to using numbers in the current version to avoid further confusion, but note that unless stated this information was present in the submitted review version.\n\nAlso, please note that Table 2 is not our only empirical result (more details below).\n\n> since it's based on only the open-fridge task since it's quite a challenging one, I am wondering, did you try experiments on other challenging tasks too? \n\nThe results in Table 2 are on the open-fridge task, but those are not the only experiments or results in the paper. \n\nAs we describe in the abstract (L11-L13), introduction (L66-L70), Section 3 (L175-L210), and Table 1 caption, we also conducted large-scale and rigorous evaluations on navigation tasks in Habitat 1.0 (PoingNav (Anderson et al, 2018), and ObjectNav (Batra et al, 2020b)). \n\nAs described in L187-L205, we see consistent and similar throughput improvements. VER is 1.6x faster than DD-PPO when training agents for PointNav and 2x faster for ObjectNav.\n\nObjectNav on the Matterport3D dataset is a challenge for learning systems because the environments have large differences in size and therefore rendering speed (~5x) and ‘resetting’ the environment can be very expensive (on the order of seconds compared to on the order of milliseconds for a step). These are two system challenges that we designed VER to mitigate and its mitigations are effective in this setting.\n\nFinally, for the rebuttal, we have now added Figure A2 that shows PointNav/ObjectNav success vs. time. It shows VER reaches a given success threshold with significantly less wall-clock time. Specifically, to reach the maximum success achieved by DD-PPO (97.4% on PointNav and 13.0% on ObjectNav), VER uses 1.6x less compute on PointNav (saving 16 GPU-days) and 4.3x less compute on ObjectNav (saving 33.4 GPU-days).\n\nWe would like to note two things: First, that the existing benchmarking results in Tab 1 and Tab 2 are the result of 1.3 years of GPU time, so this experimental analysis is rigorous and large-scale. Second, while we only provided detailed system comparisons for one Habitat 2.0 skill, OpenFridge, we have used VER to train all the skill policies. The accuracy of such a skill-chaining system is provided in Section 6.1 (L306-L316), and the high-level system trends are similar.\n\n> Would the authors consider better organizing this paper for readers from general RL background?\n\nHappy to. 
\n\nOur organization largely follows systems papers published in similar venues: DD-PPO (ICLR ‘20), SampleFactory (ICML ‘21), SEED-RL (ICLR ‘20), and HTS-RL (NeurIPS ‘20) -- describing the system, describing the task, describing system results, and describing task results. \nSince we study two largely distinct task settings and study them in different depths (Habitat 1.0 Navigation vs. Habitat 2.0 Rearrangement), we chose to present these independently as we felt that that made the paper clearer. \n\nDoes the reviewer have specific suggestions for improvement?\n\n> Since I've known how VER works and table 2 supports their claimed speed-up performance gains, I will raise my score a little bit.\n\nWe thank the reviewer for this increase. We hope that with the additional support of Table 1 and the two experiments in the paper that appear to have been missed, the reviewer will increase their score again because we don’t believe our evaluation is limited.\n", " > Could you add these graphs for 'PointNav' and 'ObjectNav' tasks?\n\nHappy to. We have added Figure A2 that shows Success vs. Time for these tasks. The result is consistent with the Habitat 2.0 tasks: VER reaches a given success threshold with significantly less wall-clock time. Specifically, to reach the maximum success achieved by DD-PPO (97.4% on PointNav and 13.0% on ObjectNav), VER uses 1.6x less compute on PointNav (saving 16 GPU-days) and 4.3x less compute on ObjectNav (saving 33.4 GPU-days).\n\n> In my understanding, these seem to be the task where the VER training is beneficial (saving training days).\n\nYes, VER does show significant training wall-clock time saving in Habitat 1.0 tasks (PointNav, ObjectNav; see above for details). However, we should note that these gains are not limited to these tasks. VER also saves _days of training time_ on Habitat 2.0 tasks (pick/place skills) with the full 500 million step budget needed for convergence. The reason why these speeds-ups may not be as stark in Fig 4 is that it uses 20 million steps to make a detailed study feasible — DD-PPO with 1 GPU would take over 1 month of wall-clock time to reach 500 million steps, making that experiment infeasible to run.\n", " Thanks for the authors' kind reply. Yes, I have a misunderstanding of Fig. 4, and now it's clear.\n\n**About empirical results supporting the speed-up claims:**\n\n- Now it's clear that Table 2 (you wrongly typed Table 3 in your response) includes the main empirical results supporting the speed-up claims. The SPS results show clear performance gains.\n- **Additional concern**: since it's based on only the open-fridge task since it's quite a challenging one, I am wondering, did you try experiments on other challenging tasks too? This is important since, for empirical evaluations, comprehensive benchmark results will better support the claimed performance gain. \n\n**About the clarity of VER method**\n\nThanks for the authors' kind explanation about how variable environment steps are achieved. I also read the other reviewer's comments, I think the clarity concern of VER method still applies here.\n\n- Three Reviewers, including 45yB, 68WX, and me rated the presentation of this paper as \"fair\".\n- As a comparison, Dd-ppo has a better presentation of their methods.\n- Would the authors consider better organizing this paper for readers from general RL background?\n\nSince I've known how VER works and table 2 supports their claimed speed-up performance gains, I will raise my score a little bit. 
\n\nTo the Area chair, please consider my two concerns here when making decisions.\n", " Thanks for the response and for adding the suggested graphs!\nCould you add these graphs for 'PointNav' and 'ObjectNav' tasks? In my understanding, these seem to be the task where the VER training is beneficial (saving training days). In Fig. 4, all tasks take just hours to train, and although VER beats baselines in these environments, it would be helpful to see the 'success-rate vs time' graphs for tasks where VER can save days of GPU training time. ", " > the presentation can be improved by including the following plot (success-rate vs time)\n\nAgreed! \n\nWe note this information (how quickly is X% success achieved as a function of wall-clock time) is implicitly present in the paper because we report throughput (steps per second) and learning curves (success vs steps), but we agree that it is useful to have this explicit plot.\n\nWe have added this plot to Fig 4. \n\n> It is not clear whether VER leads to improvement in training time (ie the time required for the policy to converge to a high success rate).\n\nVER does lead to an improvement in training time. On 8 GPUs, VER has the best sample efficiency (either tied or out-right), Fig 3 and 4, and has the highest throughput, Tab 1 and Tab 3. Given that sample efficiency is at least as good and throughput is increased, training time must decrease.\n\nThis point is even clearer with the new plot of success rate vs time. For a given OpenFridge success rate, VER achieves this in the least amount of wall-clock time (sometimes tied with baselines and sometimes outright).\n\n> Would these skills not emerge when trained with AsyncOnRL on SyncOnRL? Is there something specific to VER that leads to the emergence of these skills? Or any method trained for a sufficient amount of time can lead to the emergence of these skills?\n\nReviewer 5YLN also asked this question, please see our response to them.\n\n> Emergent skills. Although it is an interesting result, it is not very clear how the results support the main idea of the paper. In my opinion, the analysis of these skills should be part of the appendix rather than the main paper.\n\nWe are happy to move this to the appendix if other reviewers agree.\n", " > More details need to be provided to explain several concepts including TP-SRL and the architecture in Line 226.\n\nAs we describe in L220-223, we use TP-SRL as described in Szot et al 2021. This method decomposes GeoRearrange into a series of skills, Navigate, Pick, Place, Open {Fridge, Cabinet}, Close {Fridge, Cabinet}, and chains them together with a task planner (L221-223). The task planner is not learned and operates on privileged information (it knows if an object is inside/needs to be placed inside a container, i.e. the fridge). For a given HAB scenario, the task plan is the same across all instances and this is simply retrieved/used.\n\nWhile this information is all in Szot et al 2021, to make our work more self-contained, will include a section on TP-SRL in the supplement.\n\nWe believe we have fully specified the skill policy architecture in L226-234 (note that the task planner has no architecture). If the reviewer could state what they believe is missing or not adequately explained we are happy to expand.", " > The theoretical novelty and contribution of the paper are not clear. \n\nAs our paper describes, we present a simple technique for scaling on-policy batched on-policy RL systems and conduct rigorous evaluation. 
As reviewer 5YLN notices: “The focus of the paper is not on algorithmic novelty but rather on a highly performant RL system, which will be quite useful.”; and reviewer 68WX says: “The amount of time required to train the RL algorithms when compared to the fastest SyncOnRL is much much less while maintaining the same sample efficiency. Making these algorithms more accessible to the general audience.” There is no theoretical innovation in our work and our paper never claims any. We believe that a theoretical innovation is not a requirement to be a valuable contribution at NeurIPS.\n \n> Why the proposed technical can theoretically improve both throughput and sample-efficiency?\n\nOur paper does not claim that VER will always have better sample efficiency than SyncOnRL. We empirically find a case (ObjectNav) where it does improve sample efficiency and report that finding, but no general claims are made about sample efficiency. \n\nWe do claim that VER will improve throughput over SyncOnRL for heterogeneous environments (where some environments take significantly longer to run than others). The reasons are explained in the intro and body of the paper, but are fairly straightforward and we are happy to summarize them again. \n\nThe simple reason is the reduction of waiting time. \nThe throughput of both VER and SyncOnRL is described mathematically as AmountOfExperienceCollected/(LearningTime + InferenceTime + SimulationTime + WaitingTime). \n\nCompared to SyncOnRL, VER reduces WaitingTime via its straggler-effect mitigations while all other terms remain the same, thus throughput will improve.\n\nSee our answer below for how VER compares to AsyncOnRL.\n\n> What is the difference between VER and AsyncOnRL? Theoretically/mathematically (not only quantitatively), why does VER perform better than AsyncOnRL?\n\nThere are two key differences between VER and AsyncOnRL that explain why VER performs better.\n\nThe first is shown in Fig 1 -- AsyncOnRL overlaps experience collection with learning while VER does not. This explains why VER is more sample efficient. Due to this overlap, AsyncOnRL must learn with data collected from an older policy (L43-45). This effect is often referred to as policy lag and the data is often referred to as near-policy data. The on-policy objective used to optimize the policy is only well-defined for on-policy data and thus it follows that using near-policy data will reduce the efficiency of this objective. Methods like V-trace attempt to resolve this but they are only approximations. We are unaware of any work that proves that AsyncOnRL has reduced sample efficiency (and doing so is beyond the scope of our work), but this has been observed in prior work, Liu et al 2020, and observed in our work (Fig 4).\n\nThe second difference is how multi-GPU scaling is achieved. VER uses the decentralized distributed method proposed in Wijmans et al 2020. In this method each GPU both collects experience and updates the model (see Sec 2.3 for more details). In AsyncOnRL framework we compare against, multi-GPU scaling is achieved by using additional GPUs for experience collection while learning is still performed on 1 GPU (explained in L291-L301). This difference explains why VER has better throughput on multiple GPUs.\n\nMore formally, the maximum throughput of AsyncOnRL is the maximum number of samples per second the single GPU used for learning can process. This is a constant. 
As we increase the number of GPUs used for experience collection, we will approach and then reach this, but we cannot exceed it. The multi-GPU throughput of VER is nGPUs * ScalingFactor * VERSingleGPUThroughput.\n\nScalingFactor and VERSingleGPUThroughput are constants, but nGPUs is not (it will have a maximum in practice, but theoretically it can be any non-negative value). Thus there must be a value of nGPUs such that\nnGPUs * ScalingFactor * VERSingleGPUThroughput > MaxAsyncOnRLThroughput\n\n\n> The speed increase of VER on 8 GPU is only 70% faster than 1 GPU. \n\nNo, the reviewer is mistaken. \n\nAs we say on L291, VER is 6.7x faster on 8 GPUs than 1 GPU -- in Tab 3, 2861 (VER Mean, 8 GPUs) / 428 (VER Mean, 1 GPU) = 6.7x. \n\nThe 70% faster result is VER on 8 GPUs vs SampleFactory on 8 GPUs -- L14-L15, in Tab 3 2861 (VER Mean, 8 GPUs) / 1662 (SampleFactory Mean, 8 GPUs) = 1.7x, which is 70% faster.\n", " > Does the emergent navigation skill use described in section 6.2 also happen when using prior methods like DD-PPO, SampleFactory etc, even given more data (up to an order of magnitude more)?\n\nYes, we believe so. The reason is while VER has significantly higher throughput (than DD-PPO and SampleFactory), the underlying core learning algorithm (PPO) is unchanged. However, we agree with the reviewer that the implicit curriculum in VER could give it a unique advantage. \n\nTo empirically test this, we trained a Pick policy with DD-PPO for 500 million steps (the same as VER) and found it also exhibits emergent navigation (with some caveats). To quantify the amount of emergent navigation, we evaluated the Pick policy on NavPick. The Pick policy trained with VER gets 50% Success on NavPick while one trained with DD-PPO gets 42%. While this points to the possibility that VER leads to better emergent navigation skill usage, we note that the DD-PPO trained Pick policy performs ~6% worse pick, indicating that this may be due to worse training convergence. We did our hyper-parameter tuning with VER and it is entirely possible that VER and DD-PPO have slightly different optimal hyper-parameters that explains this difference. Overall, we are unable to definitively resolve this concern and will continue investigating this. \n\n> Does the proposed approach suffer in cases where the difficult environments are harder to simulate? How can this be mitigated?\n\nGreat question. We have thought about this too. \n\nFirst, please note that the environments we studied for navigation do have the property that difficult environments are slower to simulate -- large houses are slower to render -- and we didn’t see a negative impact on training performance here. In fact, we found a small but measurable improvement on ObjectNav in the Matterport3D dataset.\n\nHowever, our intuition is aligned with the reviewer’s -- at some point there must be a negative effect. To test this, we performed a toy experiment where we artificially reduced the simulation speed of all environments except one by ~30x. Thus, nearly-all experience is collected from this one fast environment. As expected, the result is overfitting -- the agent performs well in that one single (fast) environment but does poorly in the vast majority of (slow) environments. 
The resulting Pick policy achieves 93% success when sampling training environments with the same frequency as training, but only 55% success when sampling the same environments uniformly.\n\nUltimately, this pathological behavior is pointing to the underlying speed vs experience diversity trade-off. We can mitigate overfitting by forcing a minimum amount of experience from each environment. This would come at the cost of reduced throughput.\n\nWe should note that AsyncOnRL is subject to the same trade-off. It too collects more experience from slower to simulate environments. So this trade-off isn’t unique to VER.\n\n> Further analysis on the induced curriculum due to the relationship between simulation speed and difficulty of environments would be interesting. \n\nAgreed! We think this is an interesting direction for future work.Thanks for the suggestion. \n\n> An interesting aspect of the approach is that it naturally induces a curriculum ... and perhaps a design change to specifically create a curriculum between easier / harder environments.\n\nGreat idea! We developed VER for its system benefits and then empirically observed this induced curriculum. One could use VER explicitly for a curriculum and possibly see some system benefits instead. While studying this is beyond the scope of our work, we will note these points as directions for future work. Thanks again. \n\n\n> In order for this work to have an impact on the community the code must be released \n\nAgreed! We think that releasing code is paramount to the success of this work and we will not be waiting for publication acceptance to do so. We have already updated our code-base to maintain parity with habitat-lab/sim main and have begun the process of publicly releasing code.", " > The authors fail to describe how exactly the step length is decided\n\nWe are unsure what the reviewer means by “step length” as this phrase is never used in our paper, but will try to answer the question based on our best interpretation. If the reviewer can clarify what they meant, we are happy to update our answer.\n\nIf by “step length” the reviewer meant the number of steps collected from each environment, this is described in L123-128. The core idea of our algorithm (VER) is that we do not need to explicitly decide how many steps to collect per environment. Instead, this is decided implicitly based on the runtime of an environment. VER collects experience from each environment as quickly as the environments generate them and VER stops collecting experience once T*N total steps have been collected. The number of steps collected per environment is then just how many we collected in that time frame. There is no explicit “step length” parameter. \n\n> Although several tasks are evaluated to test the efficiency of VER, it's hard for the reviewer to determine the claimed performance, for example, \"in Habitat 2.0 VER is 150% faster on 1 GPU and 200% faster on 8 GPUs\". At least VER's performance gain is not obvious in Fig. 4.\n\nVER’s performance gains (speed-ups) are not visible in Fig 4 because that is not what Fig 4 shows. As the Fig 4 caption and Section 5.2 describe, Fig 4 shows sample efficiency results (accuracy vs #steps), not compute speed-ups. \n\nAs described in Section 5.1 L270-281 and the Table 3 caption, the speed-up claims are supported by results in Table 3. 
Here are the relevant speeds from that table: 428 (VER Mean, 1 GPU) / 174 (DD-PPO Mean, 1 GPU) = 2.5, which is a 150% speed-up; 2861 (VER Mean, 8 GPUs) / 1066 (DD-PPO Mean, 8 GPUs) = 2.7, which was rounded up to a 200% speed-up.\n\nAt the request of Reviewer 68WX have added Figure 4 (Lower) that shows a combined view of throughput and sample efficiency, which may be helpful.\n\nOverall, we hope that our answers have addressed the reviewer’s two stated concerns and that they will increase their rating. \n", " We thank the reviewers for their comments and feedback. We are pleased that our reviewers found the follow:\n\n- That our work “discusses an important problem in the RL community” (ep2x), “can be broadly adopted by the community due to its simplicity, and thus has potential to expedite iteration cycles for all RL researchers, pushing the field forward” (5YLN), “simple and original based on an ingenious observation about the training of the Reinforcement learning algorithm” (68WX), and “introduces an engineering scaling method that aims that takes advantages of SyncOnRL and AsyncOnRL” (45yB). \n- That our experiments are “sufficient” (ep2X), our “experimental details are comprehensive about the system capabilities” (68WX), and our analysis to be “quite thorough and exhaustive” (5YLN). \n- That our emergent navigation result is “an interesting result” (5YLN). \n- That our paper is “clearly written and motivated, and easy to follow” (5YLN), with “clear writing and neat plots” (68WX), and that “the reviewer enjoys reading the author's analysis of problems in AsyncOnRL and SyncOnRL” (ep2X).\n\nWe will address questions and concerns individually below.", " The authors present VER, a method that could both tackle the challenge in synchronous RL that the policy updates need to wait for all steps done in all environments (straggler effect), and the challenge in asynchronous RL that we need to explicitly tackle stale samples when do on-policy learning. Specifically, VER is an environment-dependent variable step size training method that uses variable step numbers according to environments. Evaluation experiments are done sufficiently. Strengths\n---------------------------------\nThis paper discusses an important problem in the RL community, i.e., what techniques should we use in the context that we usually vectorized a batch of environments in simulations to speed up the training time. Overall, the reviewer enjoys reading the author's analysis of problems in AsyncOnRL and SyncOnRL. And the motivation for variable step size rollout makes sense to me. Experiments are sufficient, and efforts are obvious.\n\nWeakness\n---------------------------------\n- (1) The authors fail to describe how exactly the step length is decided. As a comparison, the baseline DD-PPO [1] paper clearly describes the whole training solution.\n - (2) Although several tasks are evaluated to test the efficiency of VER, it's hard for the reviewer to determine the claimed performance, for example, \"in Habitat 2.0 VER is 150% faster on 1 GPU and 200% faster on 8 GPUs\". At least VER's performance gain is not obvious in Fig. 4.\n\nRef:\n-----------------\n- [1] Wijmans E, Kadian A, Morcos A, et al. Dd-ppo: Learning near-perfect pointgoal navigators from 2.5 billion frames[J]. arXiv preprint arXiv:1911.00357, 2019. Please check the 2 points stated above in the weakness section. 
N/A", " This paper proposes an approach for bridging the gap between asynchronous and synchronous onpolicy RL to leverage the best of both worlds, preserving the throughput of the former and the sample efficiency of the latter. They achieve this by parallely simulating N environments for different number of timesteps, thus avoiding being bottlenecked by the slowest environment. The authors evaluate their approach on habitat environments (including the more complex rearrange tasks from habitat 2.0), and empirically validate their claims. \n Significance \n\n-> Performance in terms of sample efficiency and training speed are critical for RL agents. This paper provides an intuitively simple approach (i.e having a budget of samples for all environments each time before the policy update, as opposed to a fixed budget for each one of the environments) that improves on both metrics. This idea can be broadly adopted by the community due to its simplicity, and thus has potential to expedite iteration cycles for all RL researchers, pushing the field forward. Further the analysis performed by the authors comparing the sample efficiency and throughput to that of prior methods (pure async or pure sync) is quite thorough and exhaustive (including a comparison to a version of their method with all micro-optimizations, except the main idea - i.e having a fixed budget for all environments). \n\n-> The authors include an interesting result that with access to skills not directly required for a task, the agent is able to leverage these to perform the task better given more data (Specifically, given access to navigation skills, the agent is able to perform pickplace better, even though navigation isn’t necessary for this task). It would be interesting to see if this emergent behavior can also be seen by running previous approaches (eg DD-PPO), even given more data. If prior methods require a lot more data to discover similar interesting emergent behavior, this would make the argument for adopting the proposed approach even stronger.\n\n-> An interesting aspect of the approach is that it naturally induces a curriculum. Since easier environments are faster to simulate, samples from these are seen more often earlier on, and this helps learning. The authors mention this in passing, but I think there can be further analysis here, and perhaps a design change to specifically create a curriculum between easier / harder environments (difficulty might sometimes not vary exactly with simulation speed, and I wonder if there are some trade-offs here that can be analyzed). \n\n\nQuality \n-> The empirical evaluation in terms of throughput and sample efficiency are the most critical components in establishing the author’s claims, and this is done quite exhaustively, even taking account variability in the number of GPUs available for training. Evaluation is included on simpler established navigation tasks from habitat1, and also on more complex longer horizon rearrangement tasks from habitat2 (which include multiple pick-place steps). The grasp action in these environments requires contact with the object and the robot to issue a command, as opposed to the ‘magic grasp’ commonly used in habitat environments (where the object is automatically grasped when close enough to the arm). This makes the control problem significantly harder, and brings the approach closer to real world settings (though that gap is still quite large, but addressing that is outside the scope of this paper). 
\n\n\nOriginality \n\n-> The idea proposed is quite simple. The focus of the paper is not on algorithmic novelty but rather on a highly performant RL system, which will be quite useful. The related work adequately covers related work in the field, including relevant asynchronous and synchronous approaches. \n\n\nClarity \n\n-> The paper is clearly written and motivated, and easy to follow. In order for this work to have an impact on the community the code must be released (since the focus of this work is a better RL system, and there are some micro-optimizations included that might be critical). The code is included in the supplementary, so I think the authors intend to release it. \n\n Questions - \n\n-> Does the emergent navigation skill use described in section 6.2 also happen when using prior methods like DD-PPO, SampleFactory etc, even given more data (up to an order of magnitude more)? \n\n-> Further analysis on the induced curriculum due to the relationship between simulation speed and difficulty of environments would be interesting. Does the proposed approach suffer in cases where the difficult environments are harder to simulate? How can this be mitigated?\n Limitations are adequately discussed", " This paper proposes a scaling technique for reinforcement learning (RL) to speed up sampling and learning in on-policy RL methods. The paper claims to blend the benefits of SyncOnRL and AsyncOnRL to increase both throughput and sample-efficiency. The main approach is to enable multiple environments to collect a variable number of steps simultaneously without synchronization. The paper performs experiments on several simulation scenarios and analyzes the performance of several baselines. + The paper tries to address an important problem in RL to speed up sampling and learning.\n\n+ The paper introduces an engineering scaling method that aims that takes advantages of SyncOnRL and AsyncOnRL. + The theoretical novelty and contribution of the paper are not clear. Why the proposed technical can theoretically improve both throughput and sample-efficiency?\n\n+ The speed increase of VER on 8 GPU is only 70% faster than 1 GPU. Is the problem big enough to use the power of 8 GPU? Or is the algorithm for the 8 GPUs well parallelized? If 8 GPUs are not fully utilized, the motivation of using the proposed method on multiple GPUs seems not essential.\n\n+ What is the difference between VER and AsyncOnRL? Theoretically/mathematically (not only quantitatively), why does VER perform better than AsyncOnRL?\n\n+ The organization of the paper needs improvement. Experiment results are scattered throughout the paper, from introduction to the four sections of experiments. Some experiment setups also appear in the approach section. This increases the difficulty of understanding the theoretical novelty of the paper.\n\n+ More details need to be provided to explain several concepts including TP-SRL and the architecture in Line 226.\n No potential negative societal impact is perceived in the paper. ", " The paper focuses on the problems pertaining to SyncOnRL and AsyncOnRL and proposes a solution that combines the advantages of these methods. SyncOnRL suffers from the straggler effect, where the slowest worker decides the overall speed of execution, and the system capability is not utilized properly. AsyncOnRL solves this by taking action in an environment as soon as it is ready. Although this leads to efficient system utilization, it leads to reduced sample efficiency. 
\nThe proposed method, Variable Experience Rollout aims to mitigate these issues. The paper points out that collecting an equal number of time steps T per environment is easier to implement but is not a necessary requirement for reinforcement learning. VER collects a variable number of steps across the different environments. This overcomes staggler effect while at the same time is more sample efficient than AysnOnRL. Strengths : \n* The paper works on an exciting and vital problem of accelerating on-policy RL algorithms while maintaining sample efficiency. \n* The idea is simple and original based on an ingenious observation about the training of the Reinforcement learning algorithm. It is a relatively novel idea. \n* The amount of time required to train the RL algorithms when compared to the fastest SyncOnRL is much much less while maintaining the same sample efficiency. Making these algorithms more accessible to the general audience. \n* The experimental details are comprehensive about the system capabilities, which is essential, especially for a paper that is mainly about utilizing system capabilities. \n* Clear writing and neat plots. \n\nWeaknesses\n* Although the idea is exciting and novel, the presentation can be improved by including the following plot\n * it would be constructive to see one plot where : \n * x-axis: Policy training time (in hours or any other suitable time unit) \n * y-axis: Success Rate \n * Baselines: Fastest AsyncOnRL, Fastest SyncOnRL, VER\n * It is not clear whether VER leads to improvement in training time (ie the time required for the policy to converge to a high success rate). \n\n* Emergent skills \n * Although it is an interesting result, it is not very clear how the results support the main idea of the paper. In my opinion, the analysis of these skills should be part of the appendix rather than the main paper. \n \n\n\n Regarding Emergent Skills\n* Would these skills not emerge when trained with AsyncOnRL on SyncOnRL?\n* Is there something specific to VER that leads to the emergence of these skills? Or any method trained for a sufficient amount of time can lead to the emergence of these skills?\n Limitations are not discussed properly. \n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 2 ]
[ "QmZIHqh7fcq", "zaQ5ZEB7hQL", "wYNN90p9omZ", "UITutjrQ7o0", "FV1RBE5_Lw", "hsfK2Wcg9Y-", "uI69wY6zO5w", "3RMUAvuqovIK", "AScXMIxnP_-", "zaQ5ZEB7hQL", "zaQ5ZEB7hQL", "jGjAPVli4w", "gfaSuQaGLgc", "nips_2022_VrJWseIN98", "nips_2022_VrJWseIN98", "nips_2022_VrJWseIN98", "nips_2022_VrJWseIN98", "nips_2022_VrJWseIN98" ]
nips_2022_KFxIsdIvUj
Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning
While Reinforcement Learning (RL) aims to train an agent from a reward function in a given environment, Inverse Reinforcement Learning (IRL) seeks to recover the reward function from observing an expert's behavior. It is well known that, in general, various reward functions can lead to the same optimal policy, and hence, IRL is ill-defined. However, \cite{cao2021identifiability} showed that, if we observe two or more experts with different discount factors or acting in different environments, the reward function can under certain conditions be identified up to a constant. This work starts by showing an equivalent identifiability statement from multiple experts in tabular MDPs based on a rank condition, which is easily verifiable and is shown to be also necessary. We then extend our result to various different scenarios, i.e., we characterize reward identifiability in the case where the reward function can be represented as a linear combination of given features, making it more interpretable, or when we have access to approximate transition matrices. Even when the reward is not identifiable, we provide conditions characterizing when data on multiple experts in a given environment allows to generalize and train an optimal agent in a new environment. Our theoretical results on reward identifiability and generalizability are validated in various numerical experiments.
Accept
The paper provides an investigation of conditions for recovering the reward function up to a constant from multiple experts. While the assumption of observing multiple (entropy regularized) experts acting in different environments is quite strong, the authors did a good job in justifying and further explaining the setting in the rebuttal. While the paper is incremental, I agree with the reviewers that the paper is solid and interesting.
train
[ "gi3vJUWlen-", "Wd8oOszDwpO", "Ia11C9GEdiY", "52_PoFdyO9p", "kyxOVsjhjV7", "0OXvquvYOU-B", "bcr83JSDalV", "O8pTeEFG92H", "IQPf77_JIjc", "RXjb9h8Bpf2", "5MYYIhaPAar", "ufOYqJGTmjk", "ct-zqnn3erWV", "l2-LOy1Jyky", "_NeFRkdx5mG", "YHpoQlFngUH", "1h5TMpLY4S", "eCA2SPjLsZ", "ECJ9GecyE8u", "qW9xNVZ5cpN" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nas the discussion period ends soon we were wondering if you have any other question on the differences between our work and (Ng et al. 1999). If that is the case, we are happy to have further discussion.\n\nBest,\nAuthors ", " Thank you for the positive feedback. We will add this conclusion in the final version.\n\nBest,\nAuthors ", " Thank you for this draft of the proposed conclusion, I think it greatly helps in properly positioning this paper's contributions, as well as providing readers with a clearer takeaway.", " Thank you for your question.\n\nWe do not think there is a contradiction between our statement and Theorem 1 in Ng et al. 1999.\n\nIndeed, using Ng et al., 1999 notation, their Theorem 1 states that an optimal policy in the MDP $ M = (S,A,T, \\gamma, R + F)$ is optimal also in $ M^\\prime = (S,A,T, \\gamma, R)$. Notice that this statement holds because both MDPs $M$ and $M^\\prime$ share the same transition dynamics $T$.\n\nOur setting is different because we consider that the two environment may have different transition dynamics, i.e. $ M = (S,A,T, \\gamma, R + F)$ and $ M^\\prime = (S,A,T^\\prime, \\gamma, R )$. In this case, Theorem 1 in Ng et al. 1999 does not apply, that is, an optimal policy in $M$ is not necessarly optimal in $M^\\prime$.", " Thank you for your response.\n\nWe have just posted a conclusion draft in the common thread. We hope this will help to address the significance concerns. Moreover, we welcome any further feedback from your side.", " Dear reviewer,\n\nThank you for your constructive feedback. In the final version, we would use the additional page to add the conclusion we propose below. The goal is to summarize the contribution and state the limitations of the setting mainly, the entropy regularized expert. We welcome any suggestion from your side about the following conclusion draft.\n\nBest, Authors\n\n**Proposed Conclusion**\n\nIn this paper, we analyze conditions that guarantee identifiability of the reward function (up to an additive constant) from multiple observed experts maximizing the same reward and facing different transition dynamics. This hence allows us to train optimal policies in any other environment sharing the same reward with the environments of the observed experts. On the other hand, in order to only generalize to a given environment, such strong reward identification is not required, and we provide a milder necessary and sufficient condition for it. We also provide identifiability results in a variety of settings, i.e., linearly parameterized reward, approximated transition matrices, observation of any number of experts, as well as a non-identifiability result in the presence of exogeneous variables.\n\nThe main limitations of this work are the following:\n\n- **Observing experts in different environments:** We saw that observing a single expert in one environment cannot lead to reward identification in our setting. We hence need to observe at least two experts acting in different enough environments. To motivate this assumption, note that varying environments are ubiquitous in RL, in particular in Robust RL which deals with the training of experts that perform well in different environments, where the transition dynamics can vary to some extent. It is hence rather common to consider that the transition dynamics of a given environment can change. 
This was studied, e.g., in [1,2,3], where the authors considered different Mujoco environments with varying friction coefficients, or object masses, which influence the dynamics. Also, instead of observing different experts in different environments, we could imagine that we observe a single expert in a single environment that varies over time (but with fixed reward), and that the expert adapts to these changes. Such observations would provide us optimal actions in environments with different transition dynamics, and thus our results would apply. This is of particular interest in economics/finance where the environment is in constant evolution.\n\n- **Assuming entropy regularized experts:** When observing real world data, we have to face the fact that humans do not follow this idealized mathematical model. However, it turns out that our results still hold for the more general class of regularized MDPs [4] where we replace the entropy with any strongly convex function. Whether the flexibility in the choice of the regularizer allows to better capture real-world behaviors is an open question.\n\n\n[1] Robust reinforcement learning via adversarial training with langevin dynamics, Kamalaruban et al. 2020\n\n[2] Robust Adversarial Reinforcement Learning, Pinto et al. 2017\n\n[3] Action Robust Reinforcement Learning and Applications in Continuous Control, Tessler et al. 2019\n\n[4] A theory of regularized Markov Decision Process Geist et al. 2018\n", " > The main implication of this observation is that knowing the reward function up to potential shaping in a given environment does not allow to train an optimal policy in a different environment.\n\nDoesn't this claim directly contradict Theorem 1 of [Ng et al (1999)](https://people.eecs.berkeley.edu/~pabbeel/cs287-fa09/readings/NgHaradaRussell-shaping-ICML1999.pdf) which shows that the optimal policy is identical in reward functions related up to potential shaping for any MDP?\n\nI think you get an even broader class of invariances if only targeting a specific environment.", " Let us clarify the connection between our work and potential shaping. Potential shaping describes the degrees of freedom in the reward that leave the optimal policy (or ranking of the policies) invariant within a certain environment. The existence of such degrees of freedom actually implies the non-identifiability of the reward function from a single expert, and is fully described in Theorem 1.\n\nHowever, while there always exists a certain flexibility of the reward in any given environment, the space of rewards that leaves the optimal policy invariant varies in different environments. This can again be seen from Theorem 1 where we observe that the degree of freedom in $r$, characterized by the choice of value function $v$, depends on the transition matrices $T$.\n\nThe main implication of this observation is that knowing the reward function up to potential shaping in a given environment does **not** allow to train an optimal policy in a different environment.\n\nIn order to generalize to **any other environment**, identification of the reward up to an additive constant is **necessary**. On the other hand, if one wants to only generalize to a given environment, such strong reward identification is not required, and Theorem 11 provides a milder necessary and sufficient condition for it. 
This necessity is made clear in the Theorem statement: “if it does not hold, then there exists a reward function compatible with experts 1 and 2 which leads to a sub-optimal policy in environment 3”.", " > Given the Remark 3 in Cao et al., 2021 we know that the set of rewards that leaves the optimal policy unchanged is completely characterized by the potential shaping transformation presented in Ng et al., 1999. At this point, the two terminologies may be used interchangeably. Please let us know if we are missing something and we should use one terminology rather than the other.\n\nReward shaping is frequently used to refer to heuristic terms added to aid policy exploration. These may change the optimal policy, though are often annealed to zero as training progresses. https://link.springer.com/referenceworkentry/10.1007/978-0-387-30164-8_731 gives several examples of this more general usage.\n\n", " Thank you for your response to my review, as well as the common response. They have mostly addressed my concerns. I look forward to reading an updated version of the paper with an added conclusion, as this will hopefully help address the concerns around significance.", " Thank you for your responses to my queries - this all makes sense. In particular, your example to explain the matrix communtation condition is really insightful and I would encourage you to include this in the paper or appendix! Great work with this paper - I enjoyed reading it :)", " **“In Theorem 7, does the real reward need to belong to the same parametrized space of the recovered reward? Please clarify this in the text”:**\n\nYes, the true reward should be in the considered set of restricted rewards. This is indeed not clear from the current theorem statement, we will modify it accordingly.\n\n**About the reward recovery algorithm:**\n\nThe ways the rewards are recovered in the experiments are described in Algorithms 1, 2 and 3 in Appendix B. The difficulty in the current setup is not to find a compatible reward with the observed experts (this can simply be done by finding a solution to a linear system), but rather to know whether the recovered reward matches the ‘true’ reward up to an additive constant.", " **About observing experts in the same environment w.r.t. different reward functions:**\n\nIdentifying independent rewards of two experts, even acting in the same environment, is a separable problem, in the sense that one expert will not provide any information on the reward of the other expert. Hence, the problem would remain ill-defined.\n\nThe second expert should provide additional information about the reward of the first expert. It seems that assuming the same reward for the second expert but different environment indeed provides new information. It is an interesting question whether we can relax a bit the ‘equal rewards’ assumption, and only account for ‘similar rewards’. This is in some way already the case in this work since we assume possibly different discount factors. We believe that our results provide useful tools to extend IRL to other setups.\n\n**About the hardness of checking the condition in Cao et al.:**\n\nThe value distinguishing assumption of Cao et al. states that there exist no pair of non-constant vectors $(w, w’)$ such that a certain equality holds, which includes $w,w’$ and the transition matrices of the two environments (equation (7) in Cao et al.). 
Naively checking this assumption would hence require testing this statement for any pair $(w,w’)$ of non-constant vectors, which is computationally impossible. In contrast, our restatement allows us to check the identifiability condition in finite time.\n\n**\"is an eigenvector of A with eigenvalue 0, and corresponds to the invariance of the optimal policy under addition of a constant to the reward function.\" is not clear:**\n\nThis comment is meant to explain where the rank deficiency comes from. We see that the eigenvector of A with eigenvalue 0 is formed of two constant vectors associated with each value vector of both experts. This means that adding a constant to the value vector of both experts would still provide a solution to the linear system (14). Moreover, looking at the reward decomposition in terms of value function (below line 555), we see that adding a constant to the value vector also yields the addition of a constant to the reward.\n\n**“The statement in line 586 in Appendix (\"This is hence equivalent...\") is not clear at all:**\n\nFirst, note that the vector pair $(v^1, v^2)$ on the right of the equation below line 585 is always a solution of the linear system on the left. Hence, we can trivially replace the implication sign ($\\Rightarrow$) in this line with an equivalence sign ($\\Leftrightarrow$). Therefore, the condition of Definition 2 precisely describes the solutions of the linear system on the left, i.e., the kernel of the matrix written below line 586. The condition states that this kernel must be formed by vector pairs of the form $(v^1, v^2) = (c\\mathbf{1}, c(1-\\gamma_1)/(1-\\gamma_2) \\mathbf{1})$ for $c \\in \\mathbb{R}$, which is a vector space of dimension 1.\n\n**“In line 313, is β essentially just the standard γ discount factor?”:**\n\nYes, β is the standard discount factor. We will replace the notation.\n\n**Significance**\n\nPlease refer to the common response.\n\nWe thank the reviewer for the suggestions on improving the clarity. These will be taken into account in the final version.", " **About generalizing to any number of experts**\n\nThe generalization to any given number of experts involves the matrix $A_n$ given in equation (45) (the ‘-’ signs can be removed since we will only compute the rank of the matrix), and the identifiability condition reads $\\text{rank}(A_n) = n|S| - 1$. We only wrote and proved the results for the case of 3 experts since it was particularly relevant for stating the generalizability results. The proof for n experts follows the same line, but we will add it in the appendix of the final version.\n\n**Line 28 - 'highly parameterized and represented by a low-dimensional set of parameters' is a little unclear - are you alluding to Deep Neural Network reward representations here?**\n\nIt is a bit the other way around. We mean that in some applications, the reward is parameterized only using a small set of parameters. In the economics/finance literature, the reward function is often represented by simple concave utility functions, such as quadratic, exponential, or power functions known as CARA, CRRA, or HARA utility. In addition, the reward function is often assumed to have a single argument, e.g., consumption. 
For instance, the classic Mehra-Prescott equity premium puzzle [1] has been established using the constant relative risk aversion utility, $r(a)=(a^{1-\\alpha}-1)/(1-\\alpha)$, and the single parameter estimated in applied work (e.g., [2]) is the relative risk aversion coefficient \\alpha.\n\nWhile only considering a small family of possible rewards leads to better identification, this may restrict its range of validity, and in particular its generalization to tasks with different transition dynamics. In this work, we consider all possible state/action dependent reward functions.\n\nWe will rephrase this part of the introduction.\n\n[1] Rajnish Mehra, Edward C. Prescott: The Equity Premium: A Puzzle. In: Journal of Monetary Economics. Nr. 15, 1985, S. 145–161\n\n[2] Hansen, Lars Peter, and Kenneth J. Singleton. 1982. “Generalized Instrumental Variables Estimation of Nonlinear Rational Expectations Models.” Econometrica 50 (5): 1269–86.\n\n**About the hardness of checking the condition in Cao et al.:**\n\nThe value distinguishing assumption in Cao et al. states that there exist no pair of non-constant vectors $(w, w’)$ such that a certain equality holds, which includes $w,w’$ and the transition matrices of the two environments (equation (7) in Cao et al.). Naively checking this assumption would hence require testing this statement for any pair $(w,w’)$ of non-constant vectors, which is computationally impossible. In contrast, our restatement allows to check the identifiability condition in finite time.\n\n**About the matrices commutation condition:**\n\n$T_{a_0}$ commutes with $T_a$ means that, in the MDP, taking first action $a_0$ and then action $a$ leads the agent to the same distribution over state as if it had taken $a$ first and then $a_0$. For deterministic dynamics this means that the final state does not depend on the order of the actions (although the obtained reward can still be different!). In gridworld, in general, going $up$ and then $right$ leads to the same state as going $right$ and then $up$ (they both lead to the state on the upper right of the current state). This property only fails when the agent is near the edge of the grid. For example, if there is a wall on the right, taking $left$ and then $right$ leads the agent to the same original state, but taking $right$ and then $left$ leads it to one state on the left, since the first $right$ action was blocked by the wall.\n\nHowever, it seems that these small contradictions to the commutation assumption did not affect identifiability in the case of gridworld, as shown in the experiments.\n\n**Line 115 - Do you mean '[additive] constant'?**\n\nYes.\n\nWe thank the reviewer for the suggestions on improving the clarity. These will be taken into account in the final version.", " **About the confusion between \"reward shaping\" and \"potential shaping\":**\n\nGiven the Remark 3 in Cao et al., 2021 we know that the set of rewards that leaves the optimal policy unchanged is completely characterized by the potential shaping transformation presented in Ng et al., 1999. At this point, the two terminologies may be used interchangeably. Please let us know if we are missing something and we should use one terminology rather than the other.\n\n**About the application of IRL to finance:**\n\nThe economics/finance literature differentiates between axiomatic and revealed preference theory. In axiomatic preference theory, the utility/reward function is posited or derived from basic axioms. 
Here, indeed, there is no difficulty in specifying a reward function, such as PnL minus lambda*CVAR. In empirical and experimental work, however, simple reward function specifications are often rejected and consumers/investors have been shown to exhibit behavioral biases and/or non-standard preferences.\n\nThis work relates more to the vast literature on revealed preference, which we should have highlighted more and differentiated better. Revealed preference theory, initiated by [1, 2], provides an approach to analyze actions (e.g., consumer’s demand behavior) by assuming that observed choices provide information about the underlying preferences, or reward function. Revealed preference theory is, hence, similar in spirit to IRL. But IRL has not (yet) been used in revealed preference analysis. [3, 4] provide excellent reviews of recent advances in revealed preference theory.\n\nThe goal of revealed preference theory is to recover the agents’ preferences. This task is important because knowledge of the reward function is required to conduct counterfactual policy analysis. Knowing the policy function is not enough. In financial applications, for instance, the impact of a Tobin tax can be assessed only knowing investors’ preferences for trading (see, e.g., [5]).\n\n\n[1] Samuelson, P.A. (1938) “A Note on the Pure Theory of Consumer’s Behavior,” Economica, 5: 61-71.\n\n[2] Samuelson, P.A. (1948) “Consumption Theory in Terms of Revealed Preference,” Economica, 15: 243-253.\n\n[3] Demuynck, T., Hjertstrand, P. (2019). Samuelson’s Approach to Revealed Preference Theory: Some Recent Advances. In: Cord, R., Anderson, R., Barnett, W. (eds) Paul Samuelson. Remaking Economics: Eminent Post-War Economists. Palgrave Macmillan, London.\n\n[4] Echenique, F., (2019) “New developments in revealed preference theory: decisions under risk, uncertainty, and intertemporal choice”, arXiv e-prints.\n\n[5] Tobin, J., (1978), “A Proposal for International Monetary Reform,” Eastern Economic Journal, Vol. 4 (July-October), pp. 153–59.\n\n\n**About motivations for reward identification and the use of entropy regularized experts:**\n\nSee the common response.\n\nWe thank the reviewer for the spotted typos which will be fixed.", " We thank the reviewers for their valuable and globally positive comments.\n\nThe main concern, raised by two of the reviewers (z9k4 and Pbuu), is the significance of the studied setup, i.e., reward identification from observing entropy regularized experts acting in different environments. More precisely, the three raises concerns were: The purpose of reward identification up to a constant rather than up to potential shaping (Reviewer z9k4), the plausibility of observing agents acting in different environments with respect to the same reward function (Reviewer Pbuu), and the soundness of considering entropy regularized experts as opposed to other types of optimal experts (Reviewer z9k4). We hence provide additional motivations for these 3 aspects.\n\n**Reward identification:**\n\nAs mentioned in the introduction in [4],\n“in many applications it is not enough to find some pattern of rewards corresponding to observed policies; instead we may need to identify the specific rewards agents face, as it is only with this information that we can make valid predictions for their actions in a changed environment. 
In other words, we do not simply wish to learn a reward which allows us to imitate agents in the current environment, but which allows us to predict their actions in other settings.”\n\nIn some cases, we may however not need to recover the exact reward in order to generalize to another given environment. This is why we also consider the more restricted goal of generalization instead of reward identification.\n\n**Observing experts from different environments:**\n\nVarying environments are ubiquitous in RL, in particular in Robust RL which deals with the training of experts that perform well in different environments, where the transition dynamics can vary to some extent. It is hence rather common to consider that the transition dynamics of a given environment can change. This was studied, e.g., in [1,2,3], where the authors considered different Mujoco environments with varying friction coefficients, or object masses, which influence the dynamics.\n\nAlso, instead of observing different experts in different environments, we could imagine that we observe a single expert in a single environment that varies over time (but with fixed reward), and that the expert adapts to these changes. Such observations would provide us optimal actions in environments with different transition dynamics, and thus our results would apply. This is of particular interest in economics/finance where the environment is in constant evolution.\n\n**Entropy regularized experts:**\n\nIt turns out that our identifiability result is valid more generally for regularized MDPs [5] where the entropy term in equation (1) is replaced by any other strongly convex differentiable function of the policy $\\Omega(\\pi)$.\n\nIndeed, we can use Proposition 1 and Definition 1 in [5] to establish that for any value vector $v$ and reward $f$, there exists a unique policy that satisfies $$\\pi(a|s) = \\nabla \\Omega^\\ast( f(s,a) + \\gamma \\sum_{s^\\prime}P(s^\\prime|s,a)v(s^\\prime)).$$\n\nwhere $^\\ast$ denotes the Fenchel conjugate. By the distributivity property (iii) in Proposition 1 of [5], we can subtract a function dependent only on state in the argument without affecting the equality. This gives that for any $v$ and $f$, there exists a unique $\\pi$ such that \n$$\\pi(a|s) = \\nabla \\Omega^\\ast( f(s,a) + \\gamma \\sum_{s^\\prime}P(s^\\prime|s,a)v(s^\\prime) - v(s))$$\n\nUsing the convexity of $\\Omega$, we have that $\\nabla \\Omega$ is the inverse map of $\\nabla \\Omega^\\ast$. Hence we obtain\n$$\\nabla \\Omega(\\pi(a|s)) = f(s,a) + \\gamma \\sum_{s^\\prime}P(s^\\prime|s,a)v(s^\\prime) - v(s)$$\nwhich is the equivalent of our Theorem 1 for general strongly convex regularizers. The only part changing is the left hand side. However, we saw in the analysis that reward identifiability was not depending on this part of the equation. When using a different regularizer, the recovered reward given observed expert policies will be different, but the identifiability condition remains the same.\n\nHowever, epsilon-greedy or deterministic greedy policies would not fit this setting. 
Identifiability is more challenging with these kinds of experts because the knowledge of such policies only informs us with the action yielding the highest expected value, but no information about the relative difference with respect to other actions, in contrast with regularized stochastic policies.\n\n\nWe will follow the reviewers' suggestion of adding a conclusion, and will discuss the limitations of this work as done above.\n\n[1] Robust reinforcement learning via adversarial training with langevin dynamics, Kamalaruban et al. 2020\n\n[2] Robust Adversarial Reinforcement Learning, Pinto et al. 2017\n\n[3] Action Robust Reinforcement Learning and Applications in Continuous Control, Tessler et al. 2019\n\n[4] Identifiability in inverse reinforcement learning, Cao et al. 2021\n\n[5] A theory of regularized Markov Decision Process Geist et al. 2018\n\n", " Inverse Reinforcement Learning (IRL) seeks to infer a reward function from demonstrations sampled from an agent acting (approximately) optimally. Unfortunately, the IRL problem is under-determined: an infinite number of reward functions will lead to exactly the same optimal policy. However, it is possible to obtain additional information when observing demonstrations from multiple experts, acting in environments that differ in transition dynamics or discount factor (but which have the same state and action space, and reward function).\n\nThis paper characterizes when this multi-expert IRL problem is or is not identifiable. The characterizations are in terms of the rank of matrices formed from the transition matrices of the different environments. In particular, Theorem 3 provides a charcterization in the two-expert, distinct dynamics case that is equivalent to that of Cao et al [1] but which is more amenable to direct validation. They derive corollaries for the three-expert case (Corollary 4) and two different discount rate (Corollary 5). There is also a key negative result (Corollary 6), showing that when there are exogeneous variables then the reward function can only be determined up to an additive function of the exogeneous variable (not a constant). They also derive stronger results in the case of a linear reward function (Theorem 7), and provide a condition suitable for estimated transition matrices (Theorem 8).\n\nThe paper concludes with Theorem 11 providing a condition for a reward function to generalize. This and other theorems are then illustrated in a brief empirical section.\n\n Some of these transformations -- such as potential shaping, or rescaling by a positive constant -- we might Originality: moderate. The key result, Theorem 3, is equivalent to an earlier result due to Cao et al. Most of the other results are relatively simple applications of this theorem. However, Theorem 3 is a non-trivial restatement, and the corollaries although not technically deep are a useful set of results.\n\nQuality: all the claims seem plausible although I have not had time to validate the proofs in detail.\n\nClarity: comprehensible but significant room for improvement. The actual technical description was fairly clear. But there is little to signpost the reader in the introduction, and I even found the introduction misleading in places, e.g. you refer to \"reward shaping\" but later it is clear you are talking specifically about *potential shaping* (but for some reason do not cite the Ng et al, 1999 paper until later).\n\nThe motivation in the introduction focusing on economics & finance also felt rather odd. 
With the benefit of hindsight, I suppose this is meant to foreshadow the empirical results, and perhaps rewards linear in features are also more plausible in this setting. But in general, I do not think of this as being a major application of IRL. Indeed, it seems relatively easy to hand-design a passable reward function for many financial applications, e.g. PnL minus lambda*CVAR. By contrast, it seems far more challenging to work in settings that do not have directly measurable metrics of success, such as \"help the user build a house in Minecraft\" (BASALT benchmark, etc).\n\nMoreover, citations [5]-[8] seem only tangentially related (prospect theory, VnM utility theorem, etc). At best they establish the \"fundamental importance\" of reward functions. But not any difficulty in specifying them.\n\nPaper would also benefit from having a conclusion section, and limitations and future work.\n\nSignificance: moderate. The matrix rank conditions are more amenable to empirical validation than prior work -- especially given the approximation condition of Theorem 8. The results for generalization are also a useful application, and notably can give guarantees of generalization even when the reward is not perfectly identifiable (depending on the environment to be transferred to).\n\nMinor typos/grammar points I'd suggest fixing:\n - \"However, [1] showed that\" -- [1] is not a proper noun, should read \"However, Cao et al [1] showed that\". (This misuse occurs many times later on in related work, as well.)\n - \"This work starts by\" -- is this work [1], or your paper?\n - \"maximized by the agent\" -- agent is ambiguous, could refer to the policy you learn, call it demonstrator instead?\n - Line 94: R should be r (lowercase) for consistency with line 91 and 97.\n - References: some words in titles need to be capitalized, plae inside {...} in the TeX, e.g. monte carlo->Monte Carlo.\n\n# Update after rebuttal\n\nThanks to the author for their detailed response. I understand the Econ motivation better now, IRL is certainly relevant for understanding human behaviour in financial transactions. I still find spotlighting this application a bit strange given that as the authors say themselves \"IRL has not (yet) been used in revealed preference analysis\". It'd be interesting to see work applying IRL to human financial decisions and see if it's more predictive than existing Econ models --- but this paper does not do that, so why focus on it?\n\nI'm still not sure in what setting we'd care about reward functions up to a constant rather than up to potential shaping as we know potential shaping does not change the ranking of policies. The author response doesn't really address this point. - Most of the results in the paper describe whether or not IRL can characterize the reward function up to a constant. But why should we care about this? Isn't it sufficient to identify it up to potential shaping, since that never changes the optimal policy or ranking of policies?\n - How might these results change if expert is not MaxEnt RL? E.g. no entropy bonus (hard optimal policy), or a different kind of suboptimality (epsilon-greedy)? I did not see any substantive discussion of limitations of this work. One obvious limitation is it assumes a particular model of the demonstrator (MaxEnt RL). Humans, unfortunately, do not follow this idealized mathematical model. 
This is a limitation common with much work in this area -- but it'd be nice to at least see some discussion of how the results might vary with different demonstrator models, to get a sense of how robust these results are.\n\nNo discussion of ethical implications but I also don't see any significant negative ethical implications from this work.", " This IRL theory paper presents a new statement of an identifiability result (first shown in [1]) for recovering a reward function given observations of multiple experts with different discount factors and/or acting under different transition dynamics in tabular MDPs. This paper improves the previous result from [1] by stating the main indentifiability result as a more practically useful matrix rank condition which can be easily verified (Thm. 3). This allows the authors to extend the result to several interesting cases - such as more than 2 experts (Corr. 4), MDPs with 'exogenous variables' (Corr. 6), Linear reward features (Thm. 7), and sample estimated transition dynamics matrices (Thm. 8). The authors also show how their result can be used to derive a reward generalization result (Thm. 11) under weaker conditions, and present numerical experiments on synthetic tabular MDPs to verify their theoretical results.\n\nI have checked the in-text derivations fairly closely and am familiar with prior theory in this area. I have not checked the proofs or extended derivations in the appendices. # Originality\n\nAlthough this paper is a re-statement of an existing result from [1] (as the author's acknowledge), this re-statement is more practically useful and enables multiple extensions, as well as numerical validation and application of the result.\n\n# Quality\n\nThe authors have done a good job engaging with prior work. Overall, this paper is of high quality and feels solid and complete, although the paper is presently missing a conclusion section / paragraph - I encourage the authors to add one to their final submission.\n\n# Clarity\n\nThis paper is clearly written, although some of the notation, language and definitions could be tidied up. For instance;\n\n * The language switches between \"Markov Decision Process' (ex. line 91) and '... Problem' (ex. Line 117). Please be consistent unless these imply different things in your writing, in which case both terms should be defined.\n * Line 92 - it may clarify the definitions to explicitly state that $\\mathcal{S}$ and $\\mathcal{A}$ are finite.\n * Line 115 - Do you mean '[additive] constant'?\n * On line 119 and elsewhere $v$ denotes a _function_, whereas on line 137 and following it now denotes a _vector_. Please clarify the notation to disambiguate these if possible.\n * Consider annotating matrix expressions (e.g. Eq. 4) with the dimensions of each term, to help the reader understand the notation. Alternately, I suggest adding a 'bar' between the columns of block matrices such as the leftmost term in Eq. 4 to clarify the matrix contents.\n\nThe inclusion of an 'intuition' for the proof of Def. 10 (Lines 225-229) is very helpful. Please consider adding similar intuition / proof sketch text for more of your other results. This will aid less-technical readers in following the material.\n\nPlease ensure figures are legible in greyscale. E.g. consider using a log color scale for Fig 1d and 1e, as well as Fig 3e. 
Please include x and y axis labels for Figs 1 and 3 (I believe the axes are states and actions?).\n\n# Significance\n\nThis paper contributes a well scoped and executed body of theory work to the IRL literature, and creates multiple opportunities for future authors to build on the results. For instance, by extending the tabular MDP results to derive probabilistic bounds in approximate IRL settings, or by applying the results in practical applications of IRL to real-world MDP problems. * The generalization pattern from 2 experts to 3 (and beyond) is not clear to my by comparing Eq 5. and Eq. 6. I.e. I would not be able to predict the structure of the equivalent rank condition for 4 experts. Can you please clarify exactly how the rank condition generalizes as the number of experts increases beyond 3? This is particularly important as other results (Thm. 7, 11) build on this expression.\n\n * Line 28 - 'highly parameterized and represented by a low-dimensional set of parameters' is a little unclear - are you alluding to Deep Neural Network reward representations here?\n\n * Line 121 - the main advance beyond [1] is that the identifiability result is easy to verify in practice (as stated on Line 121). Can you please include a brief sketch / outline in your paper illustrating why the result in [1] is _not_ easy to verify in practice, for readers not familiar with this work.\n\n * Line 243 - \"$T_{a_0}$ commutes with $T_a ~~\\forall a \\in \\mathcal{A}$\". It is difficult to intuit / conceptualize what this condition implies for the structure of the MDPs. Can you please provide a brief explanation of what this commutativity requirement implies for the MDP dynamics? Perhaps with a small example? Nil comments here.", " This paper investigates the problem of identifying a reward function (up to a constant) having access to (entropy-regularized) optimal policies of multiple experts on different environments and with different discount factors (this result was previously known only with two experts). In addition to this theoretical result, the authors provide a practical method for identifying such a reward. Finally, when assuming the reward is a linear combination of features, the authors demonstrate an interpretable reward function can be identified.\nThe authors present some empirical results on random matrices and grid worlds that showcase and confirm their theoretical results. # Originality\nThe work seems to be largely a followup of [1], but as far as I can tell, the work is original (including theory and experiments). I went through the proofs and as far as I can tell they appear to be novel and non-trivial (e.g. not just a direct application of prior results).\n\n# Quality\nThe theoretical results and proofs are of high quality and clarity. The empirical validations were done very well and properly highlighted and confirmed the theoretical results of the paper.\n\n# Clarity\nThe paper was in general very well written and presented. The motivating paragraphs before each Theorem were very clear and useful in giving a high-level intuition for the results.\nA few suggestions to help make the paper clearer:\n* In Definition 2 I'd suggest using subscripts (instead of superscripts) for the $v^1$, $v^2$s and $T^1$, $T^2$s, as otherwise they can be confused with exponents.\n* The paragraph describing the Strebulaev-Whited environment is quite difficult to follow. 
Although I appreciate the authors want to add a real-world-like environment, I feel it is mostly a distraction from the paper's flow and fails to effect its intended purpose (due to lack of clarity). Indeed, the actual figures are relegated to the appendix, so the main paper only contains a (too terse) description of this non-standard environment. I would suggest moving the environment's description to the appendix as well.\n\n# Significance\nThis is probably the weakest point of the paper for me. Although the authors present some high-level motivations in the first paragraph of the introduction, and then position it as resolving some of the issues with IRL, it remained unclear to me _when_ such a situation as explored by the authors would be useful. The Strebulaev-Whited environment, although less toy-ish than the grid worlds, is still quite artificial and unclear whether the situation considered (with different values of $\\sigma_{\\epsilon}$ has any ties to any real world situation.\n\nThese issues could have perhaps been addressed in a concluding section, but the authors chose not to include one. I would suggest removing the Strebulaev-Whited environment description (as suggested above) and adding a conclusion instead. In particular, I would suggest they clarify _when_ these methods can be useful and ideally give examples of situations where you have multiple experts operating under different environments/discount factors but with the same reward function. 1. A high-level question I had is: wouldn't it be more realistic to assume that you have multiple experts operating with different _rewards_ as opposed to different environments/discounts? I realize this is a completely different problem, but I feel it should at least be discussed when motivating the problem (see points on significance above).\n2. In line 157 it says that the condition of Theorem 3 \"is easier to check in practice\" than Definition 2, why is this? Is it because there's no need of the $v$s? It would be good to clarify.\n3. In the proof of Theorem 3 the statement \"is an eigenvector of A with eigenvalue 0, and corresponds to the invariance of the optimal policy under addition of a constant to the reward function.\" is not clear).\n4. The statement in line 586 in Appendix (\"This is hence equivalent...\") is not clear at all. Please add some more clarifying text or a reference to an established result (it's fine if it points to a chapter in a textbook, say).\n\nIn general, statements of the form above (which are not trivially true) should be clarified or referenced rather than left \"as an exercise to the reader\".\n\n5. In line 313, is $\\beta$ essentially just the standard $\\gamma$ discount factor? I don't feel the authors added much discussion around limitations of their work; most of the discussions and results served only to highlight how their results improved on prior results. From a practical perspective it would have been nice to have a bit more of a nuanced discussion here (see points above regarding significance).\n\nNo negative societal impact of the work was provided.", " The authors propose an approach to solving the ill-posedness of IRL by exploiting multiple experts. In particular, a necessary and sufficient condition on the expert environments is derived, and further extended in different scenarios (e.g. approximated dynamics, linearly parametrized reward). 
Besides a negative result, a milder generalization requirement is given such that any recovered reward suffices to train an optimal policy in a given other environment.\n\nThe work is mainly theoretical (even though the conditions are numerically verifiable), and limited by a number of strong requirements, but the results are clearly stated, their impact is well described, and the findings constitute a good contribution to the literature. The paper is well written, and it is mathematically sound. The theoretical results are well described, their requirements clearly stated, and their impact properly discussed.\n 1) In Theorem 7, does the real reward need to belong to the same parametrized space as the recovered reward? Please clarify this in the text.\n\n2) The work offers only theoretical conditions to check if the expert's reward can be properly reconstructed but does not comment on how this can be done. Which IRL algorithm can be used to be sure that the recovered reward is actually compatible with the expert's demonstrations?\nFor example, in the numerical results, how was the reward identified? N/A" ]
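To make the finite-time identifiability check discussed in the responses above more concrete, here is a minimal NumPy sketch. It assumes the block matrix `A` has already been assembled from the experts' transition matrices and discount factors as in the paper's linear system (that construction is not reproduced here), and it only illustrates the two facts stated in the rebuttal: the restated condition rank(A_n) = n|S| − 1 can be verified directly, and the kernel should be one-dimensional and made of per-expert constant blocks (the invariance under adding a constant to each value vector). The function names `nullspace` and `check_identifiability` are illustrative placeholders, not the authors' code.

```python
import numpy as np

def nullspace(A, tol=1e-8):
    # Columns returned span ker(A); computed from the SVD.
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * s.max()))
    return vt[rank:].T

def check_identifiability(A, n_states, n_experts=2, tol=1e-8):
    """Finite-time check of the restated condition rank(A_n) = n|S| - 1,
    plus a sanity check that the kernel is one-dimensional and consists of
    per-expert constant blocks (adding a constant to each expert's value
    vector leaves the system satisfied, as explained in the rebuttal)."""
    rank_ok = np.linalg.matrix_rank(A, tol=tol) == n_experts * n_states - 1
    ker = nullspace(A, tol)
    blocks_constant = (
        ker.shape[1] == 1
        and all(
            np.allclose(ker[i * n_states:(i + 1) * n_states, 0],
                        ker[i * n_states, 0], atol=1e-6)
            for i in range(n_experts)
        )
    )
    return rank_ok, blocks_constant
```

In contrast to testing the value-distinguishing assumption of Cao et al. over all pairs of non-constant vectors, this check reduces to a single matrix rank computation.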
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "52_PoFdyO9p", "Ia11C9GEdiY", "0OXvquvYOU-B", "bcr83JSDalV", "RXjb9h8Bpf2", "nips_2022_KFxIsdIvUj", "O8pTeEFG92H", "IQPf77_JIjc", "_NeFRkdx5mG", "ct-zqnn3erWV", "l2-LOy1Jyky", "qW9xNVZ5cpN", "ECJ9GecyE8u", "eCA2SPjLsZ", "1h5TMpLY4S", "nips_2022_KFxIsdIvUj", "nips_2022_KFxIsdIvUj", "nips_2022_KFxIsdIvUj", "nips_2022_KFxIsdIvUj", "nips_2022_KFxIsdIvUj" ]
nips_2022_4kjQZTNz-NH
AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos
This paper studies the problem of real-world video super-resolution (VSR) for animation videos, and reveals three key improvements for practical animation VSR. First, recent real-world super-resolution approaches typically rely on degradation simulation using basic operators without any learning capability, such as blur, noise, and compression. In this work, we propose to learn such basic operators from real low-quality animation videos, and incorporate the learned ones into the degradation generation pipeline. Such neural-network-based basic operators could help to better capture the distribution of real degradations. Second, a large-scale high-quality animation video dataset, AVC, is built to facilitate comprehensive training and evaluations for animation VSR. Third, we further investigate an efficient multi-scale network structure. It takes advantage of the efficiency of unidirectional recurrent networks and the effectiveness of sliding-window-based methods. Thanks to the above delicate designs, our method, AnimeSR, is capable of restoring real-world low-quality animation videos effectively and efficiently, achieving superior performance to previous state-of-the-art methods.
Accept
The paper proposes a method for super-resolution of animation videos. The contribution is three-fold: a new approach to learned image degradations, a dataset of high-resolution animation videos, and a multiscale model architecture. The method demonstrates good empirical results while being substantially faster than prior approaches. All reviewers are positive about the paper (although to varying degrees) and mention that the proposed "learned basic operators" are interesting and new, the dataset is valuable, and the method is thoroughly evaluated and works well. Overall, this is a solid application paper with some interesting new ideas and I recommend acceptance. I highly encourage the authors to update the paper based on the discussions with the reviewers, in particular with the details on dataset creation and the rescaling factor.
train
[ "hlvLkVJg0C_", "E_Xsf-JE0-K", "zhPGnxeTmEI", "CwQGqEsXmbB", "04Fk-7XXpR7", "uQoyiEpgYCI", "mFFqG3i0e0t", "95CkRBoQUGt", "t6yxYEP58R", "T-ir0ZWXhI_", "n5uFR54A28x", "zEusZnOSHbZ", "GUm5Lk-FcY" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your feedback.\n\nWe are glad that the detailed comments and explanations can clarify the unclear parts. We will add those detailed descriptions to the manuscript.\n\nWe agree with the reviewer that the human labor in the rescaling factor selection is a limitation. As explained above, such manual selections are only conducted on three LQ videos and then the learned neural operators can well generalize to a large number of real-world LQ videos. So, the method is still valuable and does not take much time.\n\nWe make the first attempt to explore the underlying characteristics among different rescaling factors and involve manual selection in our method. We believe there will be better designs for rescaling factor selection, which can be left a future investigation.", " I thank the authors for the detailed comments and explanations.\n\nMost of the information I wanted to know is clarified in the rebuttal.\nHowever, I believe it is worth adding such detailed descriptions in the manuscript to avoid any potential confusion for readers.\n\nWhile I lean towards a positive impression, one remaining concern is that I still think this method requires human labor in rescaling factor selection. \nAlthough that makes the proposed method work well, it is a limiting factor making it hard to extend to other datasets or settings.", " Thanks for your feedback :-)\n\n**1.** We think that FRVSR is not a strict sliding-window-based method. Sliding-window-based methods (such as EDVR[1], TOFlow[2], TDAN[3], VESPCN[4]) typically take several LR frames as inputs for restoration, usually including past frames, the current frame, and the future frames. FRVSR indeed uses the $LR_{t-1}$ frame, but for the flow estimation purpose. FRVSR does not take $LR_{t-1}$ as the input for restoration directly. Besides, it does not use future frames. \n\nThat is the reason why we do not think FRVSR is a strict sliding-window-based method.\nBut from a broader view, we *agree with the reviewer* that FRVSR can be regarded as a sliding-window-based method, since FRVSR uses both $LR_{t-1}$ and $LR_t$ for each time step $t$.\n\nWe will clarify the above discussions.\n\nWe then want to highlight that the emphasis of our method is different from FRVSR. FRVSR is the unidirectional recurrent structure and only uses the past and the current frames, while our method also wants to utilize the future frames to boost the performance.\nLeveraging the future frame information is the main feature of bidirectional recurrent structure and sliding windows (with future frames).\nOur method takes advantage of both the efficiency of the unidirectional recurrent structure and the effectiveness of future frames from sliding windows (with future frames).\n\nBesides, our method is also different from FRVSR in the following aspects.\n\n1). We further adopt the multi-scale design. Due to the characteristics of animation videos, we observe that rescaling inputs of animation videos can achieve a better balance of detail enhancement and artifact elimination. Thus, we incorporate the multi-scale design and leverage different rescaling inputs implicitly inside the network. \n\n2). We do not use flow estimation and warp components in our method. This is mainly due to the inference speed consideration. From our experiments, we do not observe an apparent performance drop when we remove flow estimation and warp.\n\n3). 
Our experimental results show that directly applying FRVSR network structure is inferior to our structure.\n\n|Network Structure | MANIQA score **↑ (high is better)**|\n| ----------- | ----------- |\n|FRVSR| 0.3583|\n|Ours | **0.3839**|\n\n**2.** Yes, $LR_{t-1}$, $LR_t$, $LR_{t+1}$ and $SR_{t-1}$ are concatenated together as the input of the multi-scale recurrent block. $SR_{t-1}$ is first downsampled by pixel-unshuffle operation to match the spatial resolution with other frames.\nWe use concatenation as it can achieve fast inference speed while utilizing extra information and obtaining good performance.\n\nHope those clarifications can address your concerns. Please let us know if you still have any unclear parts of our work. Thanks :-)\n\nBest,\n\nPaper 1447 Authors\n\n[1] EDVR: Video Restoration with Enhanced Deformable Convolutional Networks\n\n[2] TOFlow: Video Enhancement with Task-Oriented Flow\n\n[3] TDAN: Temporally-Deformable Alignment Network for Video Super-Resolution\n\n[4] VESPCN: Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation", " Thank the authors for the response. It addresses most of my concerns, especially Q1. I still have some questions about Q2. \n1. I think FRVSR is also based on sliding windows and the sliding windows only include the previous lr frame, the previous hr frame and the current lr frame. The architecture is unidirectional recurrent.\n2. In the Figure 6a, may i know whether $LR_{t-1}$, $LR_{t}$, $LR_{t+1}$ and $SR_{t-1}$ are concatenated together as the input of the multi-scale recurrent block?\n", " Dear Reviewer Pf9p,\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work. Thanks :-)\n\nBest,\n\nPaper 1447 Authors", " We thank Reviewer UxQQ for the valuable reviews.\n\n---\n\n**`Q1:` Limited novelty. Many components are simple modifications from existing work (e.g., combining the classic operators and neural-network-based operators).**\n\n1. We introduce a new degradation synthesis paradigm. Although methods like BSRGAN and Real-ESRGAN employ classic basic operators to form a complex degradation model. The degradation model still could not cover the real-world degradation space. With the learnable basic operator, the degradation space can be largely expanded and can cover more real degradations. In addition, the learnable basic operators are quite different from existing neural-network-based operators. It is challenging to adopt one large neural network to learn the whole degradation process and the entire complicated degradation distribution. In this paper, we use LR - pseudo HR pairs to train the learnable basic operators by taking into account the properties of animation videos. Our experiments also show excellent results in simulating real-world degradations and restoring practical low-quality videos. Reviewer 7uTp and Reviewer Pf9p also acknowledge its novelty and promising values.\n\n2. Besides the degradation model, we also collect a high-quality animation video dataset, and propose a compact VSR network based on the characteristics of animation videos. The network helps to restore real-world LQ animation videos effectively and efficiently ($13.7$× faster than BSRGAN and $5.9$× faster than RealBasicVSR). 
The high efficiency of our proposed compact VSR network makes our AnimeSR a more practical animation VSR model. Furthermore, our paper is also the first comprehensive work about animation VSR.\n\n---\n\n**`Q2:` The training heavily relies on the input-rescaling strategy, which restricts the scalability of work to the animation videos.**\n\nIn this work, we focus on the animation video SR task. We investigate the underlying characteristics of animation videos and observe that rescaling inputs of animation videos achieve a better balance of detail enhancement and artifact elimination. Based on this observation, we propose the input-rescaling strategy for animation videos. Due to the different characteristics of natural videos and animation videos, we agree with the reviewer that the input-rescaling strategy cannot be directly applied to natural videos.\n\nHowever, the strategy of combining learnable basic operators and classic basic operators is not restricted to animation videos. Learnable basic operators can expand the degradation space and can cover more real degradations. The input-rescaling strategy proposed in this paper is just one of the strategies specially designed for animation videos. We will discuss its effectiveness on textures in natural scenes in the revised version. Other strategies are left to investigate for natural videos.\n\n---\n\n**`Q3:` How to determine the best rescaling factor for each video?**\n\nThe input-rescaling strategy is **only** used in learning the learnable neural operators. The best rescaling factor is selected with a combination of algorithms and manual selection/verification. However, the algorithms' performance is not always satisfactory, especially for distinguishing the fine-grained artifacts. Thus, manual selection/verification is necessary.\n\nThe details of the rescaling factor selection are as follows.\n\n1. Select patches with textures and edges, because unpleasant artifacts are most likely to appear in those areas. The selection is based on edge detection.\n2. Evenly sample rescaling factors from (0, 1] with an interval of 0.1. For each rescaling factor, LR patches are rescaled and are then sent to the BasicOP-Only VSR model to get the SR results.\n3. Sort the rescaling factor based on the number of artifacts.\nSelecting results with the fewest artifacts is very challenging, we have tried several image assessment algorithms (e.g., NIQE, hyperIQA), but do not get a satisfying result. Empirically, we compare simple image statistics for the assistant, followed by a manual selection. Specifically, we calculate image gradients of LR and SR patches, and then sort the rescaling factors according to the similarity between the LR's normalized gradients and SR's. The intuition is that the unpleasant artifacts in animation videos, such as \"hollow lines\" artifacts and unwanted textures (Figure 5 in the main paper) usually produce a difference in image gradients.\n4. Based on the sorting from Step 3, we manually select the best rescaling factor and pseudo HR according to human perception.\n\nNote that models trained with a few learnable neural operators ($3$ LQ videos in this work) can generalize well to a lot of real-world videos. So, even with some manual selections, the method is still valuable and does not take much time.\n\n---\n\n**`Q4:` The writing can be improved. Sec. 4.2 is confusing.**\n\nThanks for your suggestion. We compressed the contents in Section 4.2 in the submission version. 
We will improve it.", " We thank Reviewer Pf9p for the valuable reviews. Thanks for the acknowledgment on the interesting and promising approach of combining classic basic operators and tiny neural networks together. Our responses are as follows.\n\n---\n\n**`Q1:` The use of the “input-rescaling strategy” is not demonstrated well. How do the users manually select the pseudo HR? It seems time-consuming.**\n\nSorry for the unclear description. We provide more details about the use of input-rescaling strategy and the pseudo HR selection.\n\n**1. The input-rescaling strategy is used to learn the learnable neural operators.**\n\nThe overall pipeline of training AnimeSR can be briefly summarized as follows:\n\n**Step 1: Construct the degradation model by combining classic basic operators and learnable neural operators.**\nThe learnable operators are learned from a few real-world low-quality videos (**$3$ low-quality videos** in this work and those videos are not overlapped with AVC-RealLQ). With the input-rescaling strategy, we can generate pseudo HR frames, and then adopt the low-resolution and pseudo HR pairs to train the learnable operators.\n\n**Step 2: Train the SR network with the AVC-Train dataset.**\nOn the high-quality AVC-Train dataset, we synthesize LR frames with the degradation model obtained from Step 1. We can then train the SR network. In Step 2, we do not use the input-rescaling strategy.\n\n**2. Select the best rescaling factor and pseudo HR.**\nWe conduct the rescaling factor selection with a combination of algorithms and manual selection/verification. However, the algorithms' performance is not always satisfactory, especially for distinguishing the fine-grained artifacts. Thus, manual selection/verification is necessary.\n\nThe details of the rescaling factor selection are as follows. The SR results corresponding to the best rescaling factor are regarded as pseudo HR.\n\n1. Select patches with textures and edges, because unpleasant artifacts are most likely to appear in those areas. The selection is based on edge detection.\n2. Evenly sample rescaling factors from (0, 1] with an interval of 0.1. For each rescaling factor, LR patches are rescaled and are then sent to the BasicOP-Only VSR model to get the SR results.\n3. Sort the rescaling factor based on the number of artifacts.\nSelecting results with the fewest artifacts is very challenging, we have tried several image assessment algorithms (such as NIQE, hyperIQA), but do not get a satisfying result. Empirically, we compare simple image statistics for the assistant, followed by a manual selection. Specifically, we calculate image gradients of LR patches and SR patches, and then sort the rescaling factors according to the similarity between the LR's normalized gradients and SR's. The intuition is that the unpleasant artifacts in animation videos, such as \"hollow lines\" artifacts and unwanted textures (Figure 5 in the main paper) usually produce a difference in image gradients.\n4. Based on the sorting from Step 3, we manually select the best rescaling factor and pseudo HR according to human perception.\n\n**3. The selection is only performed for a few videos.**\nModels trained with a few learnable neural operators can generalize well to a lot of real-world videos. In this work, the learnable neural operators from $3$ LQ videos can well generalize to a large number of real-world LQ videos. So, even with some manual selections, the method is still valuable and does not take much time. 
Note that the manual selection is only performed in learnable neural operators. The training dataset, whose size is much larger, does not involve the selection.\n\n---\n\n**`Q2:` The intuition behind sliding window structure and multi-scale design is not demonstrated well. How to distinguish it from FRVSR? FRVSR is also based on sliding windows.**\n\n1. Unidirectional recurrent architecture can only propagate the information from past frames and cannot propagate the information from future frames. Bidirectional recurrent is a feasible solution to get future frame information. But it takes double time and makes it unpractical. We combine unidirectional recurrent architecture and sliding window. The sliding window design only needs to look forward one more frame to obtain future information, while can still have the efficiency of unidirectional recurrent architecture.\n2. As analyzed in Sec. 4.1 in the main paper, rescaling inputs of animation videos achieves a better balance of detail enhancement and artifacts elimination. Motived by this, we also adopt multi-scale architecture in animation VSR, which can leverage different rescaling inputs implicitly inside the network. The network can learn to utilize the features with the best scale levels.\n3. We would like to kindly point out that FRVSR is not based on sliding windows. Instead, FRVSR is a unidirectional recurrent architecture and does not utilize future frames. In contrast, our architecture further incorporates the sliding window and multi-scale designs.", " We thank the reviewer for the insightful reviews. Thanks for the acknowledgment on the novel and effective approach of combining the known operators (blur, noise, FFMPEG) and learnable operators (conv layers). Our responses are as follows.\n\n---\n\n**`Q1:` Clarification on the dataset and overall pipeline.**\n\nWe first clarify some misunderstandings about the dataset and the overall pipeline. Sorry for the unclear descriptions in the main paper. We will improve related descriptions.\n\nThe **training partition** in the AVC dataset (AVC-Train) is constructed by collecting **high-quality** animation video clips.\nThe AVC-RealLQ partition is only for **testing**, which includes real-world **low-quality** videos.\n\nThe overall pipeline of training AnimeSR can be briefly summarized as follows:\n\n**Step 1: Construct the degradation model by combining known operators and learnable operators.**\nThe learnable operators are learned from a few real-world low-quality videos (**$3$ videos** in this work and those videos are not overlapped with AVC-RealLQ). The input-rescaling strategy is used to generate pseudo high-resolution (HR) frames. We then adopt the low-resolution (LR) and pseudo HR pairs to train the learnable operators.\n\n*Note that*:\n\n**1)** We use the input-rescaling strategy and generate pseudo HR for only three videos, whose size is very small compared to the training dataset (AVC-Train) used in Step 2.\n\n**2)** Those learned operators from 3 videos can **well generalize** to a large number of real-world LQ videos (*e.g.*, videos in AVC-RealLQ).\n\n**Step 2: Train the SR network with the AVC-Train dataset.**\nThe AVC-Train dataset only contains high-quality video clips. We synthesize low-resolution (LR) frames with the degradation model obtained from Step 1. With the synthesized LR and HR pairs, we can then train the SR network. 
As the degradation model from Step 1 can better capture real-world degradations, the SR network thus performs better in practical scenarios.\n\n*Note that*:\n\n**1)** In Step 2, we do not use the input-rescaling strategy. Thus there is no need to select the best rescaling factor.\n\n**2)** We adopt HR clips in AVC-Train and the synthesized LR clips for training. There is no pseudo HR in Step 2.\n\n---\n\n**`Q2:` What the main principle is in the collected dataset?**\n\nThe SR task relies on high-quality datasets (*e.g.*, DIV2K for image SR and REDS for video SR). Those datasets typically serve as the Ground-Truth, and we usually train SR networks by synthesizing low-resolution counterparts from those datasets.\n\nThe existing datasets for animation-related tasks are of low quality and contain only single images or triplet frames. Thus, considering the requirements for animation video SR, we conclude the following principles for the collected AVC dataset.\n\n1. **High quality**. We ensure the video quality by bit rate, frame resolution, and subjective quality. We also use an image assessment quality algorithm (hyperIQA) to ensure the quality of each frame.\n2. **dynamic and meaningful scenes**. We select scenes with motions for video tasks. Static and simple scenes (*e.g.*, scenes with black or pure colors) are discarded.\n3. **diversity and amount**. They are common principles and are considered for better performance and generalization.\n\nWe first employ existing algorithms for **automatic filtering**, followed by **manual selection/verification**, as automatic filtering is not always perfect.\n\nThe AVC-RealLQ dataset is constructed to fully evaluate the model’s ability in practical scenes and assess the generalizability of VSR methods.\n\nNote that the input-rescaling strategy and pseudo HR are only used for **learnable operators in the degradation model**. They are not related to the AVC dataset, and they are not used in training SR networks.", " **`Q3:` The rescaling factor selection seems to rely on human perception.**\n\nWe conduct the rescaling factor selection with a combination of algorithms and manual selection/verification. However, the algorithms' performance is not always satisfactory, especially for distinguishing the fine-grained artifacts. Thus, manual selection/verification is necessary.\n\nThe details of the rescaling factor selection are as follows.\n\n1. Select patches with textures and edges, because unpleasant artifacts are most likely to appear in those areas. The selection is based on edge detection.\n2. Evenly sample rescaling factors from (0, 1] with an interval of 0.1. For each rescaling factor, LR patches are rescaled and are then sent to the BasicOP-Only VSR model to get the SR results.\n3. Sort the rescaling factor based on the number of artifacts.\nSelecting results with the fewest artifacts is very challenging, we have tried several image assessment algorithms (such as NIQE, hyperIQA), but do not get a satisfying result. Empirically, we compare simple image statistics for the assistant, followed by a manual selection. Specifically, we calculate image gradients of LR patches and SR patches, and then sort the rescaling factors according to the similarity between the LR's normalized gradients and SR's. The intuition is that the unpleasant artifacts in animation videos, such as \"hollow lines\" artifacts and unwanted textures (Figure 5 in the main paper) usually produce a difference in image gradients.\n4. 
Based on the sorting from Step 3, we manually select the best rescaling factor according to human perception.\n\nIt is worth noting that one of the main contributions of our paper is the combination of classic operators and learnable neural operators, to get a more accurate degradation model. We then investigate how to learn such learnable neural operators. Based on the characteristics of animation videos, we propose the rescaling factor selection, which is the first attempt to explore the underlying characteristics among different rescaling factors. We believe there will be better designs for future investigation.\n\nBesides, models trained with a few learnable neural operators can generalize well to a lot of real-world videos. In this work, the learnable neural operators from $3$ LQ videos can well generalize to a large number of real-world LQ videos. So, even with some manual selections, the method is still valuable and does not take much time.\n\n---\n\n**`Q4:` The authors use NIQE and MANIQA metrics. Any correlations with those metrics?**\n\nBoth NIQE and MANIQA are non-reference image assessment metrics.\nThey are correlated on a coarse scale, but usually have different scores on a finer scale.\n\nIn detail, NIQE is based on natural scene statistics and hand-crafted features, while MANIQA is a learning-based approach and employs a powerful ViT to extract features. MANIQA is the winner solution of \"NTIRE 2022 Perceptual Image Quality Assessment Challenge Track 2: No-Reference (NR)\" and outperforms previous SoTA methods on various datasets.\n\nIn this work, we use both NIQE and NAMIQA for reference.\n\n---\n\n**`Q5:` L172-174 and Fig. 5, what is the process of input-rescaling strategy. Is input low-quality video frame downsampled and then super-resolved from a BasicOP-Only method?**\n\nYes, the input LQ inputs are downsampled with different rescaling factors and then super-resolved from a BasicOP-Only method. BasicOP-Only method is trained with degradation models consisting of only classic operators (blur, noise and FFMPEG).\n\nThen, we adopt the selection process (as described in **Q3**) to select the best rescaling factor. The SR results under the best rescaling factor, namely the pseudo HR, are used to train the learnable operators.", " **`Q6:` Is Figure 9 a result of equation (2)-like cascade of blur, noise, and LBO? In equation (2), does two LBO share weights?**\n\n1. Fig. 9 is not a result of equation (2). Fig. 9 visualizes what the $3$ LBOs have learned and how they differ from classic basic operators like classical Gaussian blur and Gaussian noise. All the LQ patches in Fig. 9 are synthesized from the same HQ patch by using **only** one LBO or **only** the Gaussian blur/noise, instead of a cascade result.\nWe train $3$ different LBOs and their corresponding synthesized LQs are visualized.\n\n2. The two LBO in Equation (2) are random selections from an LBO pool ($3$ in this paper). So they are can be different (with different weights), and it is also possible to select the same LBO in Equation (2).\n\n---\n\n**`Q7:` Blur and noise are explicitly modeled in equation (2) even without LBO. Why does LBO learn to blur and noise images according to L318?**\n\nIn L315-L318, the paper describes: \"As shown in Fig. 9, the degradations learned by LBO are **pretty different from classic basic operators, e.g., blur and noise**. It seems that LBO captures a mixture of blur and noise. 
We can also observe **color jitter around lines** in the LBO-1 neural operator, which is common in highly-compressed LQ videos.\"\n\nWe do not mean that LBO learns to blur and noise images. Instead, the degradations learned by LBO are **pretty different** from classic basic operators. Visually, it seems to be a mixture of blur and noise. Here, the blur and noise are not actually Gaussian blur and Gaussian noise, but a vague human perception. This is not an accurate description and we will improve it.\n\n---\n\n**`Q8:` Is one LBO learned per video?**\n\nYes, one LBO is learned per video. We learn $3$ LBO in this paper, and it shows great generalizability to a large number of real-world LQ videos (*e.g.*, videos in AVC-RealLQ).", " This paper proposes an animation video super-resolution method. \nFirst, the authors build the proposed AVC dataset by collecting low-quality videos and by generating pseudo-HR videos. Pseudo-HR videos are obtained by rescaling low-quality videos and super-resolving them with a conventional SR method. \nSecond, the degradation of animations are modeled by the combination of conventional corruption artifacts and learnable convolutional layers. The authors train their degradation model with the AVC dataset LR & pseudo-HR pairs.\nThird, the proposed model architecture, window-based RNN is trained by the pseudo-HR dataset and the generated LR videos.\n [Strengths]\n\nThis paper tackles the animation super-resolution problem by observing the distribution of animation and natural videos differ. The observation is validated by comparing the SoTA methods and the proposed method on animations.\nIn order to handle the unknown degradation artifacts that are hard to model, the proposed method combines the known operations (blur, noise, FFMPEG) and learnable operations (conv layers). This is a novel and effective approach.\n\nAlso, the authors construct pseudo-HR data from their observations which exhibits validity through experimental results.\n\n[Weaknesses]\n\nHowever, I wonder what the main principle is in the collected & generated dataset.\nThe video selection seems to be manual (which is fine) and the rescaling factor selection seems to rely on human perception. Were there any analyses that are not shown in the paper?\nThe authors use NIQE and MANIQA metrics. Any correlations with those metrics?\nIf the selection process is manual, it doesn’t seem to be a reproducible and scientific approach.\n L172-174 and Fig. 5. \nI don’t exactly understand what the process of input-rescaling strategy is. Is input low-quality video frame downsampled and then super-resolved from a BasicOP-Only method?\n\nFigure 9\n\nIs this a result of equation (2)-like cascade of blur, noise, and LBO?\nIn equation (2), does two LBO share weights?\n\nL318 \n\nBlur and noise are explicitly modeled in equation (2) even without LBO. Why does LBO learn to blur and noise images?\n\nL306\n\nIs one LBO learned per a video? Limitations discussed in the manuscript.", " This paper aims to improve blind video super-resolution (VSR) for animation videos. It combines two categories of degradation synthesis processes (classic basic operator and neural networks) to overcome both limitations. Besides, the authors build a new high-quality animation video dataset for animation VSR. Moreover, the paper designs an efficient network structure for VSR based on sliding window structures and multi-scale design. 
The experiments reveal that the proposed model outperforms other state-of-the-art models and validate the effectiveness of the model design. Strengths:\n1. It is interesting and seems promising to combine classic basic operators and neural networks together and learn basic operators by neural networks.\n2. The authors build a large-scale high-quality animation video dataset for animation VSR.\n3. There are plenty of experiments in the paper to validate the effectiveness of the model design. The experiments demonstrate that the proposed methods perform better than other state-of-the-art models and the combination of classic basic operators and neural networks is beneficial.\n\nWeaknesses:\n1. The use of the “input-rescaling strategy” is not demonstrated well. How do the users manually select the pseudo HR? It seems time-consuming to do it.\n2. The intuition behind sliding window structure and multi-scale design is not demonstrated well. How to distinguish it from FRVSR? FRVSR is also based on sliding windows.\n 1. The author should present how to use the “input-rescaling strategy” in detail and discuss the time-consuming issue.\n2. The author should differentiate the proposed method from FRVSR, especially the sliding window design. The authors adequately addressed the limitations and potential negative societal impact of their work.", " This paper aims to develop an effective and efficient real-world video super-resolution (VSR) method to restore real-world low-quality animation videos. The authors first proposed to learn basic operators with tiny neural networks (with 2 or 3 conv layers) to incorporate the learned networks into the degradation synthesis process. Secondly, the authors propose a new dataset AVC which is a large-scale high-quality animation video dataset for training and evaluation. The authors perform experiments on AVC dataset and demonstrate superior performance compared to state-of-the-art models. Strength: Instead of using one large neural network for the whole degradation process or classic basic operators without any learning capability, this paper proposes a novel method to learn basic operators with tiny neural networks. Meaningful discussion is also a strength of this paper (e.g. discussion on input-rescaling strategy, visualization of learned LBOs). Further, this paper presents a new dataset that contains more than a total of 50,000 frames, which is one of the main contributions of this paper.\n\nWeakness: The writing can be improved. Section 4.2 is particularly confusing. The novelty of the approach seems limited. In particular, many components presented in this work are simple modifications from existing work (e.g. combining the classic basic operators and existing neural-network-based operators). Further, the proposed supervised manner of training heavily relies on an input-rescaling strategy, which restricts the scalability of work to the animation videos.\n 1. Does the proposed input-rescaling strategy only work for animation videos? Since the authors anticipate that the input-rescaling strategy will not work well for natural videos due to textures, ablation studies on a texture could be helpful to quantitively investigate the scalability of this work.\n\n2. How to determine the best rescaling factor of inputs for each video? Since the optimal rescaling factor varies on each video, pseudocode to determine the rescaling factor can help understand the overall process.\n Societal Impact: There is no significant negative societal impact of this research." ]
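As a rough illustration of the learnable basic operator (LBO) idea summarized above — tiny networks of only 2–3 conv layers inserted into the classic blur/noise/compression degradation cascade and drawn at random from a small pool — here is a hedged PyTorch sketch. The layer widths, the residual formulation, and the ordering of operators are assumptions made for illustration; the paper's actual Eq. (2), its downsampling and FFMPEG compression steps, and the trained LBO weights are not reproduced here.

```python
import random
import torch
import torch.nn as nn

class LearnableBasicOperator(nn.Module):
    """Tiny learned degradation operator (the responses describe LBOs as
    2-3 conv-layer networks trained on LR / pseudo-HR pairs). The hidden
    width and the residual form are guesses made for this sketch."""
    def __init__(self, channels: int = 3, hidden: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One plausible formulation: predict a degradation on top of the input.
        return x + self.body(x)

def synthesize_lq(hq: torch.Tensor, lbo_pool, classic_blur, classic_noise):
    """Illustrative cascade mixing classic operators with LBOs drawn at
    random from a small pool (3 LBOs in the paper). `classic_blur` and
    `classic_noise` are placeholder callables for the classic operators."""
    x = classic_blur(hq)
    x = random.choice(lbo_pool)(x)
    x = classic_noise(x)
    x = random.choice(lbo_pool)(x)
    return x
```

Training such an operator on a single real LQ video's LR / pseudo-HR pairs, as the responses describe, would then let it be dropped into the synthetic degradation pipeline used to build training data for the VSR network.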
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "E_Xsf-JE0-K", "T-ir0ZWXhI_", "CwQGqEsXmbB", "mFFqG3i0e0t", "zEusZnOSHbZ", "GUm5Lk-FcY", "zEusZnOSHbZ", "n5uFR54A28x", "n5uFR54A28x", "n5uFR54A28x", "nips_2022_4kjQZTNz-NH", "nips_2022_4kjQZTNz-NH", "nips_2022_4kjQZTNz-NH" ]
nips_2022_NgIf3FpcHie
Rethinking Alignment in Video Super-Resolution Transformers
The alignment of adjacent frames is considered an essential operation in video super-resolution (VSR). Advanced VSR models, including the latest VSR Transformers, are generally equipped with well-designed alignment modules. However, the progress of the self-attention mechanism may violate this common sense. In this paper, we rethink the role of alignment in VSR Transformers and make several counter-intuitive observations. Our experiments show that: (i) VSR Transformers can directly utilize multi-frame information from unaligned videos, and (ii) existing alignment methods are sometimes harmful to VSR Transformers. These observations indicate that we can further improve the performance of VSR Transformers simply by removing the alignment module and adopting a larger attention window. Nevertheless, such designs will dramatically increase the computational burden, and cannot deal with large motions. Therefore, we propose a new and efficient alignment method called patch alignment, which aligns image patches instead of pixels. VSR Transformers equipped with patch alignment could demonstrate state-of-the-art performance on multiple benchmarks. Our work provides valuable insights on how multi-frame information is used in VSR and how to select alignment methods for different networks/datasets. Codes and models will be released at https://github.com/XPixelGroup/RethinkVSRAlignment.
Accept
This paper re-thinks the role of alignment in video super-resolution based on transformer models. Video alignment is costly and may require manual effort. The paper makes several inspiring and counter-intuitive observations, such as that alignment is unnecessary and may even be harmful to the transformer model. The authors present a new model that uses only patch alignment instead of pixel-level alignment, together with a larger window size, and achieves non-trivial improvements over the SOTA methods. Most of the reviewers agreed on the contribution and significance of this paper to the community.
train
[ "jhPxOIanD1", "MbITuTiGqAu", "25oPGa8gC30", "vmgqDU6JCm4", "7OB7QjqUFuA", "lYHVsy3f5jlg", "_T6hGQ9Cvc3", "SH7Vluvmf3Tb", "IFVlfem71pq", "65fr-g7hzQh", "0aIfqI2PHD-", "Qpn9TgOpJix", "NEbSjGDQY7q", "CKfYmWO3U4", "JSVHTMt4c1f", "DJTriNuno6m", "OHmgKXO_ifV", "szaeuSzPZ98", "gzzvuCpTxq7", "H0ILTT3guth", "7clqEYpHnNa" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed response. I would like to raise my rating.", " Dear Reviewer CC5y:\n\nThanks again for your precious time and valuable comments.\n\nInitially, Reviewer F7UK (denoted as R3) and Reviewer TWT3 (denoted as R4) both thought positively about our work. After rebuttal, Reviewer j86q (denoted as R2) now agrees that our response solves his/her concerns and rates our work acceptable.\n\nWe found that you and R2 share some similar concerns. The concerns include several points as follows:\n\n(1) Discussion with previous works, such as VRT, VSRT, and FGST.\n\n(2) Discussion of the FLOPs and runtime.\n\n(3) Other details about the experiments.\n\nFor all your concerns, we have provided responses. Are there any deficiencies in our rebuttal? Whether the corresponding responses and results we provide cover your concerns? The discussion period ending date is August 9. Please let us know if you have any unsolved or other concerns.\n\nThanks,\n\nPaper 1445 Authors.\n", " Thank you for your responses. The authors address my concern. I am happy to increase my rating.", " Dear reviewer TWT3:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,\n\nPaper 1445 Authors.\n\n", " Dear reviewer F7UK:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. In particular, we would like to address your concerns about our novelty. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,\n\nPaper 1445 Authors.\n\n", " Dear reviewer j86q:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,\n\nPaper 1445 Authors.", " Dear reviewer CC5y:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,\n\nPaper 1445 Authors.", " Dear Reviewers and ACs:\n\nWe sincerely appreciate all the reviews. They give positive and high-quality comments on our paper with a lot of constructive feedback.\n\nWe would like to emphasize that our work not only proposes a novel, simple and effective new method called patch alignment. Our work provides extensive analysis of existing alignment methods. We have developed and used a number of novel analytical tools. Our conclusions are insightful and meaningful. We would like the reviewers and ACs to focus on our analysis part. 
We believe that the contribution of this part to the field is as important or even more important than our new method.\n\nWe have updated our draft to incorporate the insightful suggestions of the reviewers:\n\nAccording to Reviewer 1 and Reviewer 2’s questions, we add a new section 3 in the supplementary material about \"The FLOPs and Runtime of the proposed method\".\n\nAccording to Reviewer 1 and Reviewer 2’s concerns, we add a new section 4 in the supplementary material about the special case of VSRT.\n\nFollowing Reviewer 3’s suggestion, we complete the description of the significance of our work in the conclusion section.\n\nFollowing Reviewer 4’s suggestion, we add more descriptions of the proposed method. We also adjust the space before and after the captions of the figures. Inspired by Reviewer 4’s comment, we discuss the scope, limitations and possible future work of our work in more detail in section 5 in the supplementary material.\n\nFor the other changes, the final performance of our method is presented. We also add the experiment of the PSRT-recurrent method using 30 frames for training.\n\nIn the final version, we will improve other minor points of Reviewer 1, Reviewer 2, Reviewer 3, and Reviewer 4. Thank you all for the valuable suggestions.\n\nThanks,\n\nPaper 1445 Authors.\n", " We thank Reviewer #4 for his/her comments and the appreciation of our novelty, experiments and new method. To the concerns expressed by Reviewer #4 in `Weaknesses`, `Questions` and `Limitations`, our response is as follows:\n\n`R4-Weakness`: Thanks for the suggestion. The proposed method is intuitive, and the method is easy to understand. We are concerned that introducing too many mathematical expressions will hinder the understanding of the method. We use the averaged patch motion vector to locate the corresponding patch in the supporting frame. The calculated motion vector represents the amount of movement of this patch in the supporting frame. We round this vector to avoid possible interpolation operations. Then we cut and move the entire patch found according to this vector to the same position as the patch in the reference frame. Reviewer #4 is encouraged to review Figure 7 and its caption for an intuitive understanding. We will also further improve this part of the presentation.\n\n`R4-Q1`: Thanks for the question. We present more details of the experiments involved in this paper in Section 2 of the Supplementary Material so that anyone can reproduce our results. An illustration of Figure 4 can also be found there. We paste the relevant presentation here for the reviewer to check. \n\nFigure 4 shows the performance differences between VSR Transformers with and without alignment modules for different pixel movements. The VSR Transformer backbone used in this figure contains 16 Multi-Frame Self-Attention Blocks (MFSABs). Similar to SwinIR, we add shortcut connections every 4 MFSABs. The feature dimension is 120, and the head number of the multi-head self-attention is 6. To plot the differences, we first partition the pixels into different groups according to their movement conditions and then calculate the mean square error for each group. We subtract the mean square errors of the VSR Transformer with alignment from the errors of the VSR Transformer without alignment. Thus, the parts greater than zero indicate better performance without alignment. For the first sub-figure, we study the image alignment. The window size is set to 8. 
We keep the other settings for the second sub-figure and enlarge the window size to 12. For the third sub-figure, we replace the image alignment to feature alignment. In addition to the 2D convolution feature extraction, we add one CNN residual block to extract deep features. These experiments are performed under the same training settings described in Section 3 in the main text.\n\n`R4-Q2`: Thanks for the suggestion. We will increase the space before and after the figures to promote the aesthetics of the paper.\n\n`R4-Limitation`: Thanks for the comment. We also believe that research on different degradations is necessary. The topic of this paper is Video SR, whose downsampling operation leads to unique LR patterns (shown in Figure 10). We believe that other downsampling methods will have similar effects, such as blurring and directly downsampling (the \"BD\" downsampling in other papers). For these downsampling methods, the conclusions of this paper are still valid. For other video restoration tasks, we believe that the proposed method will still lead to improvement since theoretically patch alignment preserves more information. But if patch alignment is applied directly, the resulted improvement may not be as big as in Video SR. Because the nature of its sub-pixel information will change for other applications, its network design can also be changed (such as adding multi-scale designs). We only have limited space in this paper. We emphasise the importance of research in this direction and reserve it for future work.\n", " We thank Reviewer #3 for his/her comments and the appreciation of our work. To the concerns expressed by Reviewer #3 in `Weaknesses` and `Questions`, our response is as follows:\n\n`R3-Weakness 1` and `R3-Q1`: Thanks for the comment. We respectfully disagree with your opinion on our novelty and topic.\n\n**Novelty**:\n\nFirst of all, the most important contribution of our work is not the method of patch alignment, but the analysis and conclusion with alignment and VSR Transformer. We use a number of novel analysis methods, such as (1) a case-by-case comparison of the effect of alignment on content under different motion conditions, (2) a novel video super-resolution attribution analysis, (3) the analysis of the distribution shift of fine-tuned optical flow, (4) the analysis of the smoothness of fine-tuned optical flow, and (5) quantitative comparison of the resampling methods in alignment. The finds are also novel to this community. Our findings challenge our common understanding of using Transformers to process multiple spatially misaligned images. The analysis of alignment can provide useful insights for VSR. We need to utilize inter-frame sub-pixel information, yet many image pre-process operations interfere with our utilization of this information. These observations hint that Transformer can implicitly make accurate connections for misaligned pixels. Many low-level vision tasks can take advantage of this property, such as video restoration, reference-based SR, burst image processing, stereo matching, flow estimation, etc. When designing Transformers for these tasks, we can no longer explicitly introduce the alignment modules or the cost volume modules, but give full play to the powerful modeling capabilities of the Transformer itself. These statements can be found in L335--L344 of the main text.\n\nSecond, the patch alignment is also effective and novel. 
It is the first time an alignment method is proposed to treat supporting frames as patches and keep pixels within the patch unchanged during processing. Contrary to reviewer #3, we believe it is important to keep the method simple, as this can demonstrate in the clearest way the core reasons for the success of our method. We would like to emphasize that the simple operation of the proposed patch alignment improves its performance on REDS by 0.33dB while saving nearly 2/3 of the parameters compared to the current state-of-the-art VSR Transformer model. This is enough to demonstrate the superiority and importance of the proposed method.\n\n**Note that none of the other three reviewers questioned our novelty**. Especially, Reviewer TWT3 commented: \"There is a certain novelty to this work\".\n\n**Significance**: \n\nOur response to Reviewer 3's concerns about the subject of this paper is as follows. First, there is no doubt that Video SR is a very important low-level vision task. The core issue of this task is to process sub-pixel information between multiple frames. In this regard, alignment and information transfer are the most important research topics. A large number of related works have been proposed to discuss the most efficient alignment method [4, 5, 6, 12, 18, 20, 21, 24, 35, 36, 40, 41, 44, 48, 49] for the VSR task. In state-of-the-art VSR models, the alignment module is also usually the most complex and bulky module. For example, alignment occupies 1/3 of the parameters in VRT, and 1/3 of the parameters in BasicVSR++. Our work presents novel insights in this regard. The proposed method optimizes the overly complex alignment design of the VSR Transformers [3, 21] into a near-zero-cost operation. This has demonstrated the importance of our research.\n", " `R3-Weakness 2` and `R3-Q2`: Thanks for the question. We have presented the comparison of our methods with baselines. There are two sets of comparisons. One is for the sliding window-based VSR Transformers, and the results are included in Table 2 and Table 3. Another is for the recurrent-based methods, and the results are included in Table 1 of the Supplementary Material. We paste these results here for a quick reference.\n\n**Sliding-window-based methods**:\n\nFor the VSR Transformers based on a sliding-window strategy, the proposed patch alignment outperforms state-of-the-art methods based on flow-guided deformable convolution (FGDC) while saving a large number of parameters.\n\n| Exp. Index | Alignment Method | Alignment Position | Resampling Method | Params (M) | REDS (PSNR/SSIM) |\n|------------|-------------------------------|--------------------|-------------------|------------|------------------|\n| 1 | No | -- | -- | 12.9 | 30.92/0.8759 |\n| 2 | Image alignment | Image | Bilinear | 12.9 | 30.84/0.8752 |\n| 3 | Feature alignment | Feature | Bilinear | 14.8 | 31.06/0.8792 |\n| 4 | Deformable Convolution (FGDC) | Feature | -- | 16.1 | 31.11/0.8804 |\n| 5 | Patch alignment | Image | Nearest Neighbor | 12.9 | 31.11/0.8800 |\n| 6 | Patch alignment | Feature | Nearest Neighbor | 14.8 | 31.17/0.8810 |\n\n**Recurrent-based methods**:\n\nFor the recurrent-based VSR Transformers, the baseline model is the BasicVSR++. As can be seen, replacing the CNN with Transformer blocks can bring a PSNR improvement of 0.5dB on the REDS test set. However, the FGDC alignment used 7.8M parameters, accounting for almost half of all parameters. 
Replacing the FGDC alignment with the proposed patch alignment achieves competitive results without introducing additional parameters – our method saves 7.8M of parameters. This experiment illustrates the effectiveness of the proposed patch alignment method.\n\n| Method | Frames | Params (M) | REDS PSNR | REDS SSIM |\n|------------------------------------------------|--------|------------|-----------|-----------|\n| BasicVSR++ [3], The baseline model | 6 | 7.3 | 31.38 | 0.8898 |\n| Flow-guided Deformable Alignment + Transformer | 6 | 18.6 | 31.89 | 0.8967 |\n| Patch Alignment + Transformer | 6 | 10.8 | 31.88 | 0.8964 |", " We thank Reviewer #2 for his/her comments and the appreciation of our work. To the concerns expressed by Reviewer #2 in `Weaknesses` and `Questions`, our response is as follows:\n\n`R2-Weakness 1`: Thanks for your comment. This paper argues that \"VSR Transformers can **DIRECTLY** utilize multi-frame information from **UNALIGNED** videos\". Here, we highlight two important conditions in this argument. VRT and VSRT both use a complex alignment design to align content in different frames. Although they can also utilize information from multiple frames to a certain extent, this exploitation is not DIRECTLY exploited from UNALIGNED video frames.\n\nFor the VRT model, we want to emphasise that its alignment module takes 1/3 of all the parameters. With this alignment module, the VRT is not taking advantage of the Transformer's ability to handle misaligned frames. The loss of sub-pixel information caused by this alignment module may further decrease the performance of the VRT. In other words, the design of the VRT does not benefit from the insight described in this paper.\n\nFor the VSRT model, it uses a token size of $8\\times8$. In the VSRT, self-attention is calculated between different tokens. But within the $8\\times8$ token, only CNN and MLP participate in the calculation. For VSRT, the function of utilizing multi-frame information is mostly done by CNN and MLP within the token, rather than self-attention computed between tokens. If the $8\\times8$ token is not well-aligned, the CNN and MLP cannot handle unaligned video frame tokens, and self-attention between tokens cannot help improve this.\n\nTherefore, the situations of VRT and VSRT do not conflict with the argument of this paper. On the contrary, both two cases demonstrate the value of our work. The knowledge and insights presented in this paper can help us understand the problems that still exist in VRT and VSRT design. It can also inspire future VSR models, such as the patch alignment method proposed in this paper.\n\n`R2-Weakness 2`: Thanks for the comment. We agree that making absolute assertions such as \"all alignment is harmful\" may be too strong, since we can't try all existing and future alignment methods. But our claim is that \"EXISTING alignment methods are SOMETIMES harmful to VSR Transformers\". To support our claim, we select existing alignment methods that are recognized for their generality and performance, i.e., image alignment which is a classic and intuitive alignment method and is used in many VSR models; feature alignment which is also well-studied and is the basis of many other alignment methods; and deformable convolution based alignment that supports the state-of-the-art VSR models such as EDVR, BasicVSR++. Our conclusions are validated by these representative EXISTING alignment methods. 
We found that in the range of motion that the VSR Transformers can already handle well, using alignment reduces the VSR performance it could otherwise achieve. In the case of large movements, we do not deny the role of alignment. In conclusion, we believe that our summary of the phenomena we have found is appropriate and not too strong.\n", " `R2-Q1`: Thanks for your question. First of all, the VSR Transformer architecture used in this research is based on a general image processing Transformer structure, and its effectiveness has been proven in many recent research works [21, 22, 45, 49, R1, R2]. This kind of Transformer structure contains a single-pixel token and window-based attention combined with a shift-window mechanism. In addition, all Transformer configurations used in our research are the most basic and extensive, thus ensuring our conclusions' generality.\n\nIn response to the reviewer's concerns, we did find that a structure similar to VSRT may not be within the scope of our discussion. The reason is already explained in `R2-Weakness 1`. If a large amount of spatial information (e.g., $8\\times8$ tokens) needs to be processed inside the token, self-attention cannot help extract multi-frame information. But this class of designs, e.g., VSRT and IPT, have proven to be inefficient for image processing tasks [21, 22, R1, R2]. Thus this issue does not affect the importance of our conclusions. We sincerely thank the reviewer for bringing this question to us, and we will add this discussion to the revised paper.\n\nAdditional References:\n\n[R1] Chen, X., Wang, X., Zhou, J. and Dong, C., 2022. Activating More Pixels in Image Super-Resolution Transformer. arXiv preprint arXiv:2205.04437.\n\n[R2] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R. and Van Gool, L., 2022. Recurrent Video Restoration Transformer with Guided Deformable Attention. arXiv preprint arXiv:2206.02146.\n\n`R2-Q2`: Thanks for the suggestion. Our local attribution map visualization results indicate that VSR Transformer implicitly estimates the motion between frames and inferences based on the moving objects. We believe that this capability of the Transformer is related to the computation of self-attention. In self-attention layers, we first calculate the attention matrix between the pixel tokens within the attention window. Pixels representing the same object may have higher attention scores. These scores guide the fusion between the features from the reference frame and the supporting frames. Thus, only related pixels are selectively calculated, which plays a similar role to alignment. We include a relative description in L250 of the main text. We will refine this part of the explanation in the revised paper.\n\n`R2-Q3`: Thanks for the question. We listed all the experimental details in Section 2 of the Supplementary Material. The experiments in Table 4 are implemented with a window size of 8 and can already achieve state-of-the-art performance. In fact, only one VSR Transformer experiment in Table1 uses a window size of 16. For the rest of the quantitative experiments, we all use a window size of 8. We will make this clearer in the revised paper.\n\n`R2-Q4`: Thanks for the question. In patch alignment, we use the estimated optical flow to find the best matching patch, rather than a similarity metric. 
For the problem of \"different illumination and scales\" mentioned by reviewer #2, the other alignment methods based on motion estimation will also encounter challenges, not just patch alignment. Also, since patch alignment uses the averaged motion vector within the patch to find the corresponding one, the averaging operation can introduce additional robustness. So patch alignment is affected by this situation less than other methods. The excellent performance of our method also speaks to the same conclusion.\n", " `R2-Q5`: Thanks for the question. The test set of Vimeo-90K is divided into three test sets, slow, medium, and fast. But according to our statistics, even the Vimeo-90K fast dataset's motion is far from the REDS test set's motion. The relevant statistical histogram is shown in Figure 2, and its specific measurement method is described in Section 2 of the Supplementary Material. As can be seen that the motion in the Vimeo-90K dataset is generally small and movement magnitudes of 99% pixels are less than 10 (for each clip, we measure the motion of the 4th and the 7th frames). Differently, there are large motions in the REDS dataset. There are at least 20% of pixels that have movement magnitudes larger than 10 (for each clip, we measure the motion of the 3rd and the 5th frames). Therefore, if we want to compare the performance of the patch alignment method under large motion conditions, the performance on the REDS dataset is more convincing. We paste some comparisons here for quick reference. Note that our largest model PSRT-recurrent was not completely trained at the time of submission. The performance we paste here is the final performance of our model. RVRT [R2] is the state-of-the-art VSR model released days before. As can be seen that we achieve state-of-the-art performance on the REDS dataset. When trained using 7 frames, we achieve the best performance on the Vid4 test set and comparable performance on the Vimeo-90K test set. When trained using 14 frames, we achieve the best performances on both Vid4 and Vimeo-90K test sets. The experimental results demonstrate the superior performance of the proposed patch alignment method. We will update these results in the revised paper. \n\n| Method | Frames (REDS/Vimeo) | Params (M) | REDS (PSNR/SSIM) | Vimeo-90K (PSNR/SSIM) | Vid4 (PSNR/SSIM) |\n|----------------|---------------------|------------|------------------|-----------------------|------------------|\n| BasicVSR++ | 30/14 | 7.3 | 32.39/0.9069 | 37.79/0.9500 | 27.93/0.8425 |\n| VRT | 16/7 | 35.6 | 32.19/0.9006 | 38.20/0.9530 | 27.93/0.8425 |\n| PSRT-recurrent | 16/7 | 13.4 | 32.72/0.9106 | 38.15/0.9529 | 28.03/0.8496 |\n| RVRT [R2] | -/14 | 10.8 | - | 38.15/0.9527 | 27.99/0.8462 |\n| PSRT-recurrent | -/14 | 13.4 | - | 38.27/0.9536 | 28.07/0.8485 |\n\n`R2-Q6`: Thanks for the question. We paste some of the data here for a quick reference. We calculate the average FLOPs of our method and some existing methods. This FLOPs is calculated using LR frames with size $180\\times320$. We also record their average runtime of them. As can be seen, the number of parameters of our method is less than other Transformer methods. One of the reasons is that our method saves a lot of parameters on the alignment module. Our FLOPs and runtime are also within a reasonable range. As the acceleration and optimization of Transformers are still to be studied, we believe that given our relatively small FLOPs, there is room for further optimization of the runtime of our method. 
For the training time, only VRT reports their training time. Our method's training time and cost are roughly the same compared with VRT. We will include these results and a discussion of FLOPs and runtime in the revised paper. \n\n| Method | Training Days | Parameters (M) | FLOPs (T) | Runtime (ms) |\n|----------------|---------------|----------------|-----------|--------------|\n| DUF | - | 5.8 | 2.34 | 974 |\n| RBPN | - | 12.2 | 8.51 | 1507 |\n| EDVR | - | 20.6 | 2.95 | 378 |\n| VSRT | - | 32.6 | 1.6 | - |\n| VRT | 15 | 35.6 | 1.3 | 243 |\n| PSRT-recurrent | 18 | 13.4 | 1.5 | 812 |", " We thank Reviewer #1 for his/her insightful comments. Our responses are as follows: \n\n`R1-Q1`: Thanks for your question. The VRT paper does not directly demonstrate our argument that \"VSR Transformers can directly utilize multi-frame information from unaligned videos\". On the contrary, a complex alignment module remains in the design of the VRT, and this alignment module takes 1/3 of all the VRT parameters. With this alignment module, the VRT is not taking advantage of the Transformer's ability to handle misaligned frames. The loss of sub-pixel information caused by this alignment module may further decrease the performance of the VRT. In other words, the design of the VRT does not benefit from the insight described in this paper. \n\nThis question raised by reviewer #1 also demonstrates the value of our work. There are three reasons. (1) Contrary to existing explanatory speculation, our work demonstrates the Transformer's superior ability to handle misaligned video frames with well-designed experimental analyses. Previous practices such as VRT and VSRT, although possibly involving similar network structures, were performed on aligned videos. (2) Our work shows through experiments and illustration why existing alignment methods are sometimes harmful to VSR Transformers. This not only guides our development of the patch-alignment method but also guides the design of other VSR methods in the future. (3) Our work inspired the invention of the patch-alignment method. The patch-alignment method does not use any complex advanced deformable convolution and other techniques, but frees VSR Transformers from the heavy complex alignment module design in a very simple way. Note that the alignment module occupies nearly 1/3 of the parameters of the VRT. \n\n`R1-Q2`: Thanks for the question. We respond to your sub-questions one by one: \n\nFirst, we do NOT argue that no alignment is better in the case of small motion because the feature after alignment exceeds the attention window. We argue that, if we use alignment, its inaccurate optical flow and interpolation warping method is the source of the performance loss. In the range that the VSR Transformer can handle, no alignment does not introduce the sub-pixel information loss caused by alignment. Thus, even if the objects are within the attention window in both cases, no alignment loses less information, and achieves better performance. \n\nSecond, you asked about more evidence. However, the PSNR performance comparison is the most intuitive illustration of our argument. Our experimental design precisely controls the variables, it can reflect the effect of alignment under different motion conditions. Reviewer #1 is encouraged to check the experiments in Table 2 and Table 3. These experiments verify the negative impact of interpolation methods on information utilization and support the above argument. 
\n\nThird, we respectfully disagree with the reviewer's statement of \"direct interactions with its neighbouring $12\\times12$ pixels\". In a self-attention layer, only the pixels in the $8\\times8$ attention window will directly calculate the attention of each other. Although due to the shift-window mechanism, certain pixels will calculate the attention within another $8\\times8$ window in the next layer. However, between the pixels of the non-overlapping parts of the two attention windows, the attention is not calculated directly but indirectly via the shift-window transfer mechanism. Beyond the $8\\times8$ range, the VSR Transformer cannot directly compute the correlation between two pixels. So in the case of the movement beyond this range, the performance of no alignment drops significantly. \n", " `R1-Q3`: Thanks for the question. Although the VSRT model presented in [3] has global attention, two designs limit its performance. First, the VSRT uses image warping alignment. According to our conclusion, alignment can corrupt sub-pixel information carried in video frames, resulting in a performance drop. Second, the VSRT uses a token size of $8\\times8$. In the VSRT, self-attention is calculated between different tokens. This calculation is free of the indicative bias of locality. But within the $8\\times8$ token, only CNN and MLP participate in the calculation. This calculation is subject to locality bias. If the $8\\times8$ token is not well-aligned, the CNN and MLP cannot handle unaligned video frame tokens, and self-attention between tokens cannot help improve this. \n\nThe situation mentioned by the reviewer is more relevant to using a larger window in our framework. We also present these experiments in Table 1. We copy some of the results here for quick reference. We can see that the improvement of the VSR Transformer with a larger window is limited on the Vimeo-90K benchmark, because the window size of 8 can already handle the motion of Vimeo-90K according to Figure 2. On the contrary, using a larger window can bring significant improvement for the REDS dataset because it can handle a wider range of motion. It is reasonable to speculate that if a VSR Transformer with a global window is built according to the method described in this paper, its performance on REDS will be further improved. But this attempt is uneconomical in engineering, especially when the proposed patch alignment can already solve this problem to some extent. \n\nWe thank the reviewer again for bringing this question to us. We will add a discussion about the situation of the VSRT model in the revised version. \n\n| Method | Alignment | Window Size | Vimeo-90K | REDS |\n|-----------------|-----------|-------------|----------------|----------------|\n| VSR Transformer | No | 8 | 37.43 / 0.9470 | 30.56 / 0.8696 |\n| VSR Transformer | No | 16 | 37.46 / 0.9474 | 30.81 / 0.8745 |\n\n`R1-Q4`: Thanks for the question. FGST [24] is an excellent concurrent work. We learned a few days before our submission that FGST was accepted by ICML. We congratulate the authors of FGST.\n\nBack to the reviewer's question, although FGST and we use the same words in some expressions, we have some major differences. FGST expresses their method as finding similar patches in the supporting frame, but it is implemented by pixel-wise warping using NN resampling at the deep feature. Due to its multi-scale network design, each pixel in the deep feature corresponds to a patch region in the original image. 
Aside from the difference in multi-scale design, FGST is somewhat equivalent to NN warping on deep features. However, our proposed method manually divides the patch on the image and keeps the pixels within the patch untransformed during alignment. We emphasize that it is important to maintain invariant relationships between pixels within a patch, which FGST does not demonstrate in this regard. We include a comparison of this approach in Table 2. We copy the results below for quick reference. It can be seen that in the absence of explicit patch partitioning, only using NN warps on deep features such as FGST provides the best performance among existing alignment methods. However, the proposed image level patch alignment can already produce the same performance, and feature level patch alignment performs better than their FGST method.\n\n| Method | Align. Position | Resampling | REDS (PSNR) | REDS (SSIM) |\n|-------------------|-----------------|------------|-------------|-------------|\n| Feature alignment | Feature | Bilinear | 31.06 | 0.8792 |\n| Feature alignment | Feature | NN | 31.11 | 0.8801 |\n| Patch alignment | Image | NN | 31.11 | 0.8800 |\n| Patch alignment | Feature | NN | 31.17 | 0.8810 |\n\nOn the other hand, our findings can also explain the above experimental results. For the NN warping, after the optical flow is rounded, the values of most of the flat regions (where optical flow does not change drastically) become the same integer. At this point, these regions are no longer subject to the loss of sub-pixel information from inaccurate optical flow and interpolation. Therefore, the performance will be improved compared to other methods. And this operation can cause problems in some areas where the optical flow changes drastically. Patch alignment avoids the problem of the NN warping failure in such areas by dividing patches and keeping the patch pixels unchanged. This not only justifies the proposed patch alignment method, but also illustrates the practical value of our analytical work (Section 4). These findings both explain existing methods and inform new ones.\n\n\n", " `R1-Q5`: Thanks for the question. The [1] citation of the main text is the Layer Norm paper, and the citation [1] of the supplementary material is VSRT. Here we will answer the reviewer's question based on VSRT. In `R1-Q3`, we discussed the differences between the VSR Transformer architecture used in our work and VSRT. The patch in VSRT refers to treating the $8\\times8$ patch as a whole and as a token. There is no self-attention operation inside their patch token, only the operation of CNN and MLP. Their self-attention operations are performed between different patch tokens. Our method is to divide the $8\\times8$ patch into an attention window. In each attention window, we use each pixel as a token and perform self-attention operations between different pixel tokens. The implicit alignment described in this paper is performed between tokens that participate in self-attention. There is no self-attention inside the token, so alignment is still required for $8\\times8$ token in the VSRT.\n\n`R1-Q6`: Thanks for the question. We argue that our results are not contradictory. Warping with NN resampling is equivalent to rounding the optical flow to a certain extent. At this time, no matter whether the optical flow is smooth or not, the difference after rounding will not be too large. And what we describe as \"smooth optical flow is better\" holds in the case of warping with bilinear resampling. 
In this case, the smooth optical flow introduces less high-frequency noise. There are also fewer cases where adjacent pixels are warping with very different optical flows. This helps preserve sub-pixel information for high performance.\n\n`R1-Q7`: Thanks for the question. We only do the patch alignment operation once in the proposed method. After the shift operation, discontinuities between patches in the supporting frame will appear within the scope of self-attention, as shown in Figure 8. Indeed, preserving this discontinuity in the supporting frames is not intuitive. But we have found that this simple method already gives a good result. This shows that the information retained by our patch alignment is very important. We also retain the possibility to further improve the performance of patch alignment through more complex designs.\n\n`R1-Q8`: Thanks for your comment. We also believe that the conclusions about image alignment and feature alignment are not new. So we do not claim that this discussion is one of the contributions of this paper. The findings of this paper are consistent with this widely accepted conclusion. Our work provides an understanding of this view. Feature alignment extracts these patterns before sub-pixel information is corrupted by alignment, and produces better performance.\n\n`R1-Q9`: Thanks for your suggestion. We calculate the average FLOPs of our method and some existing methods. The FLOPs results are calculated using LR frames with size $180\\times320$. We also record their average runtime. The results are shown below. As can be seen, the number of parameters of our method is less than other Transformer methods. One of the reasons is that our method saves a lot of parameters on the alignment module. Our FLOPs and runtime are also within a reasonable range. As the acceleration and optimization of Transformers are still to be studied, we believe that given our relatively small FLOPs, there is room for further optimization of the runtime of our method. We will include these results and a discussion about FLOPs and runtime in the revised paper.\n\n| Method | Parameters (M) | FLOPs (T) | Runtime (ms) |\n|--------|----------------|-----------|--------------|\n| DUF | 5.8 | 2.34 | 974 |\n| RBPN | 12.2 | 8.51 | 1507 |\n| EDVR | 20.6 | 2.95 | 378 |\n| VSRT | 32.6 | 1.6 | - |\n| VRT | 30.7 | 1.3 | 243 |\n| Ours | 13.4 | 1.5 | 812 |\n\n`Other Comments`: Finally, we thank Reviewer #1 for his/her insightful comments and the reviewer's efforts in reviewing this paper.", " This paper studies the alignment problem in video super-resolution. It conducts a series of experiments to show that attention plays a key role for the problem. Using larger window size for unaligned videos can also achieve good performance. Besides, to reduce computation cost, it proposes a patch alignment strategy that aligns different patches. It achieves competitive performance on multiple video super-resolution benchmark datasets.\n Strengths:\n\n1, interesting finding on the relationship between attention window size and motion magnitude.\n\n2, a patch alignment strategy that crops and aligns patches rather than pixels\n\n\n\n\n\nWeakness:\n\n1, Previous attention-based method [21] proposes to use attention for implicit alignment, along with which feature alignment is used to deal with misalignment outside of the attention window. The key idea seems similar: attention can deal with misalignments.\n\n2, When the motion is small, the feature after alignment is largely still within the attention window. 
Why is no alignment better for small motions? Is there any more evidence (except for the final PSNR comparison)? Besides, due to the shifted window strategy, one pixel should have direct interactions with its neighbouring 12x12 pixels, so why is no alignment worse when the pixel movement magnitude is larger than 8 in Fig. 4 (a)?\n\n3, If alignment is unnecessary, global attention (the largest window size) should achieve the best performance. Why is the PSNR of [3] (using global attention) far from good? How about the performance of using global attention for the proposed framework?\n\n4, The patch alignment strategy is very similar to [24], which also crops patches from image features. [24] is even more flexible than this paper, as it samples several patches at a time. Attention is used in a similar way.\n\n5, The patch alignment strategy is also very similar to [1], which samples tokens (patches) for attention according to optical flow.\n\n6, Contradictory experimental results. When using nearest sampling in warping, the optical flow cannot be updated during training as there is no gradient. According to your conclusion, using an updated smooth flow is better than using the original pre-trained flow.\n\n7, If we need to find corresponding patches for each window, do we need to do it again when the windows are shifted?\n\n8, Some conclusions are not new. For example, the superiority of feature warping over image warping is already well-studied in previous works.\n\n9, No runtime and FLOPS comparison. See weaknesses. The authors have discussed the limitations and potential societal impact.\n", " This paper rethinks the alignment in video Super-Resolution Transformers and makes two observations. (i) VSR Transformers can directly utilize multi-frame information from unaligned videos, and (ii) existing alignment methods are sometimes harmful to VSR Transformers. Based on these two observations, the authors remove the alignment in VSR and propose patch alignment to reduce the computational burden. Many experiments demonstrate state-of-the-art performance on REDS. Strengths:\n1. The authors conduct extensive experiments to explore the role of the alignment in VSR and make two observations.\n2. The proposed VSR Transformer method achieves state-of-the-art performance on multiple datasets.\n\nWeaknesses:\n1. For the first observation, VSR-T and VRT are able to utilize multi-frame information.\n2. The second observation, \"existing alignment methods are sometimes harmful to VSR Transformers\", is too strong. 1. The authors state that \"existing alignment methods are sometimes harmful to VSR Transformers\". The importance of the alignment has been proved in most existing VSR methods. The performance of the alignment highly depends on the specific architecture of VSR Transformers. The observation may only be suitable for the proposed Transformer in this paper.\n\n2. The proposed VSR Transformer implicitly conducts the alignment and tracks the motion between frames. The authors provide a visualization of the local attribution map to highlight the pixels in each frame. It would be better to explain, from the perspective of the mechanism, why the proposed VSR Transformer can implicitly align the features in each frame.\n\n3. Recently, some studies demonstrate that a large window size gives better performance. In Table 4, the performance gain on REDS4 may mainly come from the large window size of 16x16. To verify the effectiveness of the patch alignment, it would be better to report results with a window size of 8 in the supplementary. \n\n4. 
Patch alignment is sensitive to the similarity metric. When adjacent frames have different illumination and scales, the performance of patch alignment may drop. How is this issue relieved in the experiments?\n\n5. The authors state that the proposed method can deal with large motions. However, the performance is worse than VRT on Vimeo-90K-T, which has different degrees of motion. The Vimeo training set has 7 frames.\n\n6. How many days does it take to train PSRT-recurrent on the REDS dataset? Besides, it would be better to report the inference time. Yes", " This work explores the alignment modules in Transformer-based video super-resolution (VSR) methods. The research begins with experiments finding that VSR Transformers can directly utilize unaligned videos and that existing alignment methods can harm VSR Transformers. To solve this problem, this paper proposes a new and efficient alignment method named patch alignment, which aligns image patches instead of pixels. The experiments show the proposed method achieves state-of-the-art performance. Strengths:\n1) The paper is easy to follow. The motivation is clear and straightforward, and the main problem is well defined.\n2) The problem of alignment for VSR Transformers is interesting; this paper shows VSR Transformers can utilize unaligned videos.\n3) The performance is state-of-the-art; the proposed method achieves the first or second best performance on each benchmark.\n4) The method is efficient; not many extra parameters and computations are introduced.\n\nWeakness:\n1) The main weakness seems to be the limited novelty and contribution. The main contribution of this work is the patch alignment method, but the method of aligning patches instead of pixels seems too simple. The problems and solutions presented are relatively narrow, and I'm afraid this doesn't contribute much.\n2) The experiments are not very clear. I see that the VSR Transformer with the patch alignment method achieves good performance, but I cannot find how much improvement the patch alignment brings compared with the baseline method.\n My main concern with this paper is the novelty and contribution. The problem of alignment does not seem broad and relevant, and I'm not sure if this research will contribute enough to the field. Another concern is that I did not find a clear experiment showing how much improvement the patch alignment method brings compared to the baseline method, so I'm not sure if it's efficient enough. Yes", " This paper rethinks alignment in video Super-Resolution Transformers, demonstrates that existing alignment methods are unnecessary or even harmful to the VSR Transformer and can damage the sub-pixel information, and proposes a simple but effective alignment method. Strengths\n1.\tThere is a certain novelty to this work, reconsidering the alignment in video super-resolution Transformers, while other methods retain the complex alignment module.\n2.\tThe paper makes a reasonable analysis of the experimental results, proposing that the existing alignment methods are unnecessary or even harmful to the VSR Transformer and can damage the sub-pixel information.\n3.\tProposes an effective alignment method --- patch alignment, and experimental results show that this method achieves state-of-the-art VSR performance.\n\nWeaknesses\n1. Details about patch alignment are unclear. 
Section 5.1 could have further additions, in particular on how to find the corresponding patches in the supporting frames for each patch, and how to move each patch to the corresponding position.\n\n 1.\tThe explanation of Figure 4 is not clear: how are the results of Figure 4 obtained, especially Figure 4(c)?\n2.\tThere should be one line of space before the figure caption and one line of space after the figure; the layout of the figures is too crowded. This paper uses bicubic interpolation to produce LR video frames. It could be further discussed whether the experimental results would be affected by different degradation processes." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "_T6hGQ9Cvc3", "szaeuSzPZ98", "lYHVsy3f5jlg", "7clqEYpHnNa", "H0ILTT3guth", "gzzvuCpTxq7", "szaeuSzPZ98", "nips_2022_NgIf3FpcHie", "7clqEYpHnNa", "H0ILTT3guth", "H0ILTT3guth", "gzzvuCpTxq7", "gzzvuCpTxq7", "gzzvuCpTxq7", "szaeuSzPZ98", "szaeuSzPZ98", "szaeuSzPZ98", "nips_2022_NgIf3FpcHie", "nips_2022_NgIf3FpcHie", "nips_2022_NgIf3FpcHie", "nips_2022_NgIf3FpcHie" ]
nips_2022_pGcTocvaZkJ
Censored Quantile Regression Neural Networks for Distribution-Free Survival Analysis
This paper considers doing quantile regression on censored data using neural networks (NNs). This adds to the survival analysis toolkit by allowing direct prediction of the target variable, along with a distribution-free characterisation of uncertainty, using a flexible function approximator. We begin by showing how an algorithm popular in linear models can be applied to NNs. However, the resulting procedure is inefficient, requiring sequential optimisation of an individual NN at each desired quantile. Our major contribution is a novel algorithm that simultaneously optimises a grid of quantiles output by a single NN. To offer theoretical insight into our algorithm, we show firstly that it can be interpreted as a form of expectation-maximisation, and secondly that it exhibits a desirable `self-correcting' property. Experimentally, the algorithm produces quantiles that are better calibrated than existing methods on 10 out of 12 real datasets.
Accept
This paper studies quantile regression for censored data. Neural network models are used as the statistical model. Numerical results show that the proposed algorithm is computationally efficient and attains high prediction accuracy compared to existing methods. Since the reviewers agree that this paper is well written and interesting, I recommend accepting the paper.
train
[ "5BhuO_Ls3O", "DKzGbbL5kge", "A2GINEwU7d", "VrFuvD0nTJp", "X0RUh2mGs2o", "lw_XjbGnoRM", "NpIXPgaDMSMe", "uro6jvy6TPM", "lUWNEUBmFX", "VFh4wmBzd3i", "paJ5iGtnptN", "hTX64kd6KXh", "ywvkrYAnsf", "-TJhni2iFfj", "3VrjEXkhPja" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for pushing on this. After another look, we agree that Algorithm 1 \\& 2 should use $\\hat{\\mathbf{w}}$ (it conflicts as is), and it would be better to be consistent in the text about whether we refer to estimates or true quantities (e.g. line 117 should use estimates). As per your suggestion, dropping the bar notation, $\\bar{w}_j \\to w_j$, seems the simplest way to resolve this. We will update this notation in the next version of the paper. ", " Any notation is fine if it is consistently used. \nWhen you write $w_j = \\frac{\\tau - q_j}{1- q_j}$ , you say that $q_j$ is the quantile at which the data point was censored, so I understand that $q_j$ it is the true quantile, so $w_j$ should be the true weight.\nIn Algorithm 1, the estimate $\\hat{\\bf{q}}$ is used to compute $\\bf{w}$, so here $\\hat{\\bf{w}}$ would make more sense since it is an estimate too.\nTherefore, I think that the additional notation $\\bar{w}_j$ for the true $w_j$ is not really needed.\nWhen you treat the weights as latent variables instead of a fixed unknown parameters (Section 4), I think you can still use $\\bf{w}$.\n\n", " Thank you for taking the time to go over our rebuttal. We are delighted to hear we were able to address your main reservation. ", " Thank you for your response and updated score. We really appreciate your open-mindedness in this process.", " I thank the authors for taking the time to respond to my questions. Their answer is detailed, and adresses my main reservation (as outlined in my initial review). As a result of this, I am raising my score to a 6 (from a 5 initially).", " I thank the authors for detailed replies. The weakness in my review are still there. But I think the novelty of the paper outweighs the weakness. So I raised my score. I thank the authors for the clarification but the problem is still there.\n\n1. Using Log Normal MLE as a baseline. Adding more baselines will make your work more convincing. But I am okay with lognormal as a baseline as what you said it's a baseline that has distributional assumptions.\n\n2. Convergence guarantees for CQRNN. I understand this is hard. But this makes me not giving a high score and makes the algorithm mysterious.\n\n3. Quantile regression vs. full distribution modeling. The reply is convincing.\n\n", " Thanks to all of you for your insightful reviews, which have guided us in improving the paper substantially. We have now uploaded a new version (main and appendix now joined in one pdf). Notable changes include:\n* Improved comparison of CQRNN with the sequential grid. We've refreshed Table 3 and 5, repeating over multiple random seeds, have added error bars, and implemented tests of statistical significance. We've also added similar analysis on real datasets in Table 6.\n* We have included new analysis in Appendix Figure 5, comparing the partial optimisation performed by CQRNN, with a full optimisation procedure that is closer to the typical EM algorithm.\n* Related work has been significantly extended. Thanks to you all for pointing us to important prior work.\n\nWe have responded to each of your reviews individually. We are happy to engage in continued discussion during the discussion period.", " Thank you for taking the time to review our paper. We are pleased you see novelty in our work, and found it easy to follow. Below, we have clarified the goals and utility of quantile regression, and respond to your specific concerns. 
We apologise if the paper was not clear on these points -- we have begun updating sections to rectify this. We hope that our comments help highlight the value of our contributions, and may warrant a re-evaluation of your score.\n\n__Quantile regression vs. full distribution modelling.__ \nQuantile regression allows prediction of a distribution at a finite set of quantiles. A major benefit is that one avoids assuming any specific form of distribution between these quantiles. A drawback is that one has access to this distribution only at this fixed set of quantiles, and not to the full distribution. We agree with you that for certain problems, predictions at this set of fixed quantiles may not be sufficient, and a full predictive distribution should be sought -- here we may not recommend CQRNN.\n\nNevertheless, having predictions at multiple quantiles can be more useful than having a point estimate prediction, since one has a measure of the uncertainty. This is often summarised as a prediction interval, say between $\\tau = 0.1$ and $\\tau = 0.9$. \nFor example, knowing a patient has 10 years $\\pm 1$ year to live, compared to knowing they have 10 years $\\pm 8$ years, might greatly impact decisions around medical care and financial planning.\n\nIt is with prediction intervals and point estimates in mind that we chose to evaluate many metrics at $\\tau \\in [0.1, 0.5, 0.9]$. These $\\tau$ values were available across all grids we tested, allowing comparison of model quality across different values of $M$. \nOur metrics on calibration are assessed across all quantiles.\n\nIt is incorrect to say that Portnoy’s method allows access to an infinite set of $\\tau$'s. Their method is equivalent to our sequential grid algorithm, which uses a predefined fixed set of $\\tau$'s. \nHowever, if one is determined to move from the set of predefined quantile predictions to predicting at intermediate quantiles, an obvious approach is to linearly interpolate between two adjacent predicted quantiles. This technique can be used with both CQRNN and Portnoy's linear models. (We touch on this in B.1.) But doing this implicitly makes a distributional assumption, which was what quantile regression tried to avoid!\n\n__Using Log Normal MLE as a baseline.__ \nSurvival data typically have skewed distributions with long tails which suit certain parametric distributions. From a popular textbook: \"Examples of distributions that are commonly used for survival time are: the Weibull, the exponential, the log-logistic, the log-normal, and...\" [4] (page 292). Hence, we said Log Normal \"suits properties of real-world time-to-event survival data\".\n\nLogNormal MLE has been used by recent works on NNs and survival analysis as a baseline [1, 2, 3], which we believe made it important to for us to include. We felt it was interesting to compare the distribution-free approach taken by CQRNN, with a model that commits to an explicit form of distribution. We did not run into trouble with its optimisation.\n\nWe agree with you that other methods may also make relevant baselines. We focused most of our efforts on comparing to quantile-based methods, which are aligned with CQRNN's objective. If you feel strongly about including a particular baseline, let us know and we can try to implement this. Thanks also for making us aware of SODEN, which we were unfamiliar with, and have cited.\n\n__Convergence guarantees for CQRNN.__ \nWe did attempt to derive convergence guarantees for linear models, but this turns out to be difficult! 
\nA challenge in analysing CQRNN is that there is a complex feedback loop between the quantile estimates, which affect the loss function, which affects the optimisation, which affects the new quantile estimates... \nHence, even when model is linear, the algorithm's behaviour is not.\nSo while we could study how individual quantile estimates, $\\hat{y}_{j, \\tau}$, moved during optimisation given weights (Theorem 2), we were unable to analyse how these movements affect future iterations of the algorithm. The EM analogy does provide insight into how this feedback works.\nSince the objective of our paper is showing applicability of the algorithm to NNs and not linear models, we avoided committing too much effort to studying the linear case. The EM analogy and the self-correcting property presented both hold for general non-linear models. \n\n[1] X-CAL: Explicit calibration for survival analysis, Goldstein et al., NeurIPS 2020 \n[2] Countdown regression: Sharp and calibrated survival predictions, Avati et al., UAI 2019 \n[3] Modeling Progression Free Survival in Breast Cancer with Tensorized Recurrent Neural Networks and Accelerated Failure Time Models, Yang et al., 2017 \n[4] Survival Analysis, A Self-Learning Text, Kleinbaum \\& Klein, 2012", " Thank you for your review. Allow us to respond to your queries below. We have updated the results table at your request with error bars and run a test of statistical significance. We have additionally extended this analysis to the real datasets. We hope this demonstration of CQRNN offering statistically significant improvements might justify an increase in your evaluation of the paper.\n\n__Performance gains of CQRNN vs. sequential grid (SG).__ \nThanks for your suggestion, and please accept our apologies for not including error bars originally. We have repeated the experiments in Table 1 (highlights five synthetic datasets) and Table 5 (which reports all 14 synthetic datasets) and reported 95\\% confidence intervals in the updated paper version. CQRNN significantly outperforms SG on 8/14 datasets, while SG significantly outperforms CQRNN on 3/14 datasets, and there is no statistically significant difference (when the 95\\% confidence intervals include 0.0) on 3/14 datasets. One instance of lower CQRNN performance is due to a large number of censored datapoints falling below the first quantile (see $^*$), and disappears with a larger grid size $M$.\n\nIn addition to this comparison on the synthetic type 1 datasets, we felt it was important to evaluate whether CQRNN's performance boost also held on real data. We therefore have also run a head-to-head comparison of CQRNN vs. SG on type 3 datasets, included in our brand new Table 6. In terms of C-Index, CQRNN outperforms SG on 4/7 datasets while SG outperforms CQRNN on 2/7 datasets. In terms of calibration, CQRNN outperforms SG on all 7/7 datasets. \n\nWe believe that this improvement in quantile accuracy, combined with an order of magnitude saving in terms of training time, test time, and parameter count, makes CQRNN preferable to SG for any researcher or practitioner applying quantile regression using NNs, to survival data. 
Please let us know whether you agree with this belief, and if not, whether you would like to see any more experiments as further evidence?\n\n__Contribution of applying the sequential grid algorithm to NNs.__ \nWhilst applying Portnoy’s estimator to NNs might seem an obvious thing to try from the way the paper is presented, this has not previously been done, and we wanted to ensure our paper is credited for this contribution, as well as development of CQRNN. A lot of our research effort was spent combing the survival literature and experimenting with the various estimators to select one that could be adapted to NNs. Even the SG algorithm we used (presented in Algorithm 1) was modified from the original paper. Specifically, we make the initialisation scheme more efficient than the original version, which is supported by our analysis in section C.2.\n\n__Adapting classical survival models.__ \nWe are in agreement that there are many great ideas in the classical survival literature, that could help guide NN design for survival analysis, and might also be combined with ideas from quantile regression. \nFeel free to reach out after the review process if you had a specific idea in mind to explore!\n\n \n\n$^*$We investigated the dataset where SG outperforms CQRNN by a large margin (Norm Uniform), finding that heavy censoring below the first quantile is the cause of the performance difference. SG initialises it’s quantile estimates $\\hat{\\mathbf{q}} \\gets 0.0$ (Algorithm 1), while the lowest available value for CQRNN $\\hat{q}_i $ is 0.1 when $M=9$ (since, $\\operatorname{grid}_\\tau \\in [0.1,0.2 ... 0.9]$). In the Norm Uniform dataset, most censored datapoints are closer to the 0.0 quantile than 0.1 (see Figure 3), so SG has a slight advantage. When $M$ is increased to 19, meaning the lowest available quantile value for CQRNN is 0.05, the performance of CQRNN and SG is not statistically different (included as an extra row in Table 5).\n", " Thank you for your positive review, and particularly your attention to detail. We respond to your comments below.\n\n__Discussion of quantile regression with NNs.__ \nWe agree that our related work section was missing a section covering quantile regression and NNs, and have added this to the paper.\nWe have included your suggested citations. \n\nAs we understand, CQRNN is compatible with those crossing quantile remedies. We extended our discussion of this in appendix B.1, which describes the two methods we already trialled. We remain surprised that these crossing-quantile methods did not deliver empirical gains on our assessed metrics, and hope this might be explored in future work.\n\n__Algorithm 1, $\\tau$ query.__ \nThanks for pointing this out, actually the line $\\tau \\gets \\operatorname{grid}_\\tau[i-1]$ should have been written $\\tau \\gets \\operatorname{grid}_\\tau[i]$ (otherwise this throws an error when $i=0$). We have updated this.\n\n__Capitalisation of references.__ \nWe note the NeurIPS style guide also recommends capitalising these words, and have updated the paper with this.\n\n__$w_j$ notation.__ \nWe introduced, $\\bar{w}_j$ \\& $\\hat{w}_j$, in order to distinguish from $w_j$, which is used in a more general sense throughout the text to refer to censored weights without being specific if it refers to estimates or ground truth. Our notation also follows Portnoy’s own (their Eq. 14, 16). 
We have not edited this in the paper for now -- let us know if you have a preferred notation scheme.", " Thank you for your review, we are pleased you were able to understand our work with full clarity and appreciated the value of our contributions. We respond to your questions below.\n\n__Extend related work to highlight pros and cons of CQRNN.__ \nWe have expanded the related work section to discuss when CQRNN might be preferred compared to parameterised distributions (e.g. our LogNorm MLE baseline), and the advantages and disadvantages of CQRNN compared to censored quantile linear models. Please let us know if you feel any further discussion might be helpful. \n\n__Investigation of the effect of partial vs. full maximisation.__ \nThis is a very interesting point. In fact we originally anticipated that the algorithm would have to fully optimise the NN before updating the $\\hat{q}_i$’s, and repeating. Initial experiments then suggested it was possible to update the $\\hat{q}_i$’s online, after partial maximisation. This seemed to give fast convergence, was stable, and it also resulted in a simpler algorithm, which encouraged us to pursue this direction. \n\nInspired by your comment, we have added plots in appendix Figure 5 (also text in Section B.1 and 4.1), comparing partial optimisation as in Algorithm 2, with variants that do a fuller optimisation before updating quantile estimates, $\\hat{q}_i$. These results were run on our synthetic Normal Linear dataset. We observe that the partial optimisation as presented in CQRNN converges fastest.\n\n__Performance of DeepQuantReg when censoring is independent of covariates.__ \nAnother very good point! We spent some time investigating. One potential issue we noticed is that the estimator from Huang et al. [1] (which DeepQuantReg builds on) is designed for recovering only the median, $\\tau=0.5$, but DeepQuantReg directly applies it to other quantiles. We suspect there might be some undesirable bias introduced when applying this to quantiles away from the median. \nIn contrast, Portnoy's estimator is consistent across quantiles. This could explain the difference in performance of CQRNN and DeepQuantReg even when censoring is independent of covariates.\nCuriously, we also noted that [1] states they do not require independence of covariates and censoring. \n\n\nMore generally, Huang's and Portnoy's estimators are quite different approaches, with the former weighting observed datapoints only, while the latter splits censored data into two pseudo datapoints, and this in itself could lead to differing results. \n\n[1] Least Absolute Deviations Estimation for the Accelerated Failure Time Model, Huang et al. 2007", " The authors proposed a neural network approach for quantile regression on censored data, based on Portnoys's estimator. The authors also proposed a new algorithm to mitigate the computational challenge when combining neural networks and the existing sequential grid algorithm for Portnoys's estimator. In addition, the authors made the connection between the proposed CQRNN algorithm and the EM algorithm for further understanding and compared CQRNN with existing methods on both synthetic and real-world datasets. Strengths:\n* The proposed method is not just replacing the linear model with a neural network in the existing algorithm for Portnoy's estimator. 
They did identify new computational challenges when optimizing a neural network through the existing sequential grid algorithm, which was less problematic for linear models, and proposed a new CQRNN algorithm that achieves faster speeds and saves memory (while the improvement mainly depends on the size of the quantile grid).\n* The connection to the EM algorithm under certain distribution assumptions and the self-correcting property are very interesting. I have a related question below.\n* The authors provided solid numerical evaluations and comparisons with existing approaches both qualitatively and quantitatively, and in terms of both prediction accuracy and computation efficiency. \n\n\nWeaknesses:\n* It would be worthwhile to add more discussion in \"related work\" so as to explicitly highlight the improvement or limitation in comparison to other existing methods.\n* For partial maximization, the authors mentioned the alternative version that is closer to the standard EM algorithm. While in each iteration the proposed algorithm saves computational costs by taking a single gradient step, the number of iterations required for convergence might also change or even increase. Do the authors have any comments on this or have the authors investigated the impact of taking a single gradient instead of multiple gradient descents on the overall computing time/convergence?\n* From Table 2, the censoring distribution in LogNorm (and Norm uniform, etc.) in synthetic datasets seems to be independent of covariates. In this case, the assumption in DeepQuantReg also holds while CQRNN still performs better than DeepQuantReg. Can the authors explain this? \n Please see the weaknesses above. Yes, the authors discussed the limitations.", " This paper proposes a neural network based approach for doing quantile regression on possibly censored data, i.e. when the target variable is sometimes not directly observed and instead a lower or upper bound is known, as in the case of survival data.\nThe authors first build on the linear approach on Portnoy 2003 and extend it straightforwardly to neural networks. However this approach requires the sequential optimization of a new NN for each quantile level to be predicted, which is computationally heavy.\nThen a new method is proposed for simultaneous quantile regression , which is interpreted as a form of expectation maximization, allowing to estimate all the quantiles simultaneously.\n Strengths:\n- The paper is clearly written and the background on censored data and survival analysis is properly explained.\n- The analysis gives interesting insights and useful guarantees on the algorithm\n- The experiments are interesting and appropriate.\n\nWeakness:\n- Lacks a discussion on the recent literature on simultaneous quantile regression with NN\n\n\n \n I think it would be interesting to discuss and relate this work to the recent literature on simultaneous quantile regression with NN that alleviates or avoids the crossing quantile problem. In particular, it would be interesting to discuss how these methods could be adapted to censored data using your construction.\n\nA non-exhaustive list:\n- *Tagasovska, N., & Lopez-Paz, D. (2019). Single-model uncertainties for deep learning. Advances in Neural Information Processing Systems, 32.*\n- *Zhou, F., Wang, J., & Feng, X. (2020). Non-crossing quantile regression for distributional reinforcement learning. Advances in Neural Information Processing Systems, 33, 15909-15919.*\n- *Brando, A., Center, B. 
S., Rodriguez-Serrano, J., & Vitria, J. (2022, May). Deep Non-Crossing Quantiles through the Partial Derivative. In International Conference on Artificial Intelligence and Statistics (pp. 7902-7914). PMLR.*\n\nMinor comments:\nAlgorithm 1: since $\\tau \\leftarrow grid_\\tau[i-1]$, $\\hat{\\boldsymbol{q}}[K]$ and $\\hat{\\boldsymbol{q}}[\\neg K_{cross}]$ are assigned the same value, is this correct?\n\nAlgorithm X, Equation X, Appendix X,… should start with a capital letter\n\nTo avoid confusion, I think it is better to say quantile level for $q_j$\n\nLines 187-191: $\\bar{w}_j$ is the true weight, $\\hat{w}_j$ is the estimate, what is $w_j$ then?\n\nLine 324: which were the methods use to combat the crossing-quantile problem?\n\n Yes", " The paper considers the task of quantile regression in a survival analysis context, i.e. one where some examples are right-censored. As in almost all recent works, the encoder backbone is a neural network. The authors start by adapting the sequential grid (SG) algorithm to work with neural networks, as opposed to strictly linear models. Noting the inneficiency of the adaptation, they propose CQRNN, a novel algorithm which is more efficient. CQRNN departs from SG in that the assignment weights are updated during training.\nIn addition to introducing their proposed algorithm and presenting experimental results jusitying it's adoption, the authors provide a qualitative discussion of their algorithm. In particular, they frame it as an Expectation-Maximization algorithm, and provide a proof sketch as to why their proposed solution should result in \"sensible\" weight allocations. I. Strenghts\n- The paper is well-written, and reasonably clear. The discussion is easy to follow, and the relevant prior works necessary to understand the novel approach are made explicit.\n- The algorithm is clearly presented (good explanations + an explicit algorithm).\n- Experiments are well detailed: hyperparameter optimization of the competing methods was done (to a certain extent).\n\nII. Weaknesses.\n- As the authors note, the performance gains over vanilla sequential grid are minor (I am referring to table 1). In particular, it is hard to tell whether they are statistically significant without error bars or similar information. As the difference between the two is a major point (otherwise the contribution is incremental), the question I ask in the next section has the goal to clarify this point. - Could the authors re-do the comparison in table 1? I would like to confirm whether the purported advantage for their method is statistically significant.\n- Would it make sense to adapt classical survival models (the authors cite a few in their work) to this context by simply changing the target to be the quantile center? I am curious what would be the relative performance of these approaches: quite a few are very strong in their original field.\n\nEdit: The authors have responded to my questions. Taking this into account, I am raising my score. I don't see any potential negative societal impact for this work. If anything, survival analysis is a field that has the potential to *positively* impact medicine.", " The authors use neural networks to replace the linear models in censored quantile regression. And the authors further improve the sequential grid algorithm with bootstrap weights. The authors explain the soundness of their method from an analogy of EM. The authors compare their method with three baselines on synthetic and real dataset. Strengths:\n1. The method is novel. 
Introducing nn into quantile regression and the new bootstrap algorithm are both contributions to the society.\n2. The authors make some effort to explain why and how the bootstrap algorithm CQRNN works.\n3. The writing is clear. It's easy for me to follow the paper.\n\nWeakness:\n1. I appreciate that the authors explain how CQRNN work from an EM analogy and self-correcting property. But they still sound vague to me. Can we have a simpler example to understand it in a more rigorous way? For example, if the model is linear, how the convergence property is like using the CQRNN bootstrap algorithm? \n2. The empirical comparison does not seem to be thorough if the authors want to prove that their model is a good \"survival model\".\n- On synthetic datasets, the authors only compare the performance only on several $\\tau$'s (0.1, 0.5, 0.9). But for a survival model, why do we only care about certain $\\tau$'s? How do we get values at other $\\tau$'s that the quantile regression has not modeled? For a survival model, quantile at certain $\\tau$'s is not the only thing we care about. We may need to look at the whole survival distribution. And then we compare other scores, for example, Brier score.\n- The authors only consider lognormal model as their baseline and the reason is that they believe \"this distribution often suits properties of real-world time-to-event survival data.\" Why do you believe so? As far as I know, lognormal models have strict distributional assumptions and may not have good performance. It is also hard to train due to the exp transformation from normal. Other models like DeepHit, SODEN may have better performance on concordance or calibration.\n 1. Can we get a more concrete guarantee on the convergence of the propose algorithm? Some guarantee when the model is linear should also be good.\n2. Quantile regression seems to give us quantiles for certain $\\tau$'s . But for a valid survival model, we need the whole distribution. How do we get the whole distribution? Or why do we not need to whole distribution in survival modeling? In Portnoy (2003), $\\tau$ seems to be part of the function so we do not infinite amount of $\\tau$'s. But in the authors' setting, for a different $\\tau$, we need a different model. This requires the infinite amount of $\\tau$'s to get the whole distribution.\n3. Why do authors choose lognormal model as the baseline for survival modeling? One extra limitation seems to be that CQRNN could not get the full survival distribution." ]
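To make the objective at the centre of the exchange above concrete, below is a minimal PyTorch sketch (ours, not the authors' released implementation) of the weighted pinball loss with Portnoy-style pseudo-datapoints at a single quantile level tau. The names (`censored_pinball_loss`, `y_inf`) and the treatment of censored points that the quantile has not yet crossed are our assumptions based on the discussion of w_j = (tau - q_j)/(1 - q_j) in the responses and reviews above.

```python
import torch

def pinball(residual, tau):
    # rho_tau(u) = max(tau * u, (tau - 1) * u), the standard check (pinball) loss
    return torch.maximum(tau * residual, (tau - 1.0) * residual)

def censored_pinball_loss(y_pred, y, observed, q_hat, tau, y_inf=1e6):
    """Weighted pinball loss at one quantile level `tau` for right-censored data.

    y_pred   : (N,) predicted tau-quantiles
    y        : (N,) event time if observed, censoring time otherwise
    observed : (N,) boolean mask, True when the event was observed
    q_hat    : (N,) estimated quantile at which each point was censored (assumed < 1)
    y_inf    : large constant standing in for the +inf pseudo-datapoint
    """
    loss_obs = pinball(y - y_pred, tau)

    # Portnoy-style split of a crossed censored point: weight w on the censoring
    # value and (1 - w) on the pseudo-datapoint at y_inf.
    w = (tau - q_hat) / (1.0 - q_hat)
    loss_split = w * pinball(y - y_pred, tau) + (1.0 - w) * pinball(y_inf - y_pred, tau)

    # Censored points the quantile has not yet crossed (q_hat >= tau) are treated
    # as if they were observed at their censoring value.
    crossed = (~observed) & (q_hat < tau)
    return torch.where(crossed, loss_split, loss_obs).mean()
```

In an SG- or CQRNN-style procedure, `q_hat` would itself be refreshed from the model's current quantile predictions during training, which is the feedback loop discussed in the replies above.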
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4 ]
[ "DKzGbbL5kge", "VFh4wmBzd3i", "X0RUh2mGs2o", "lw_XjbGnoRM", "lUWNEUBmFX", "uro6jvy6TPM", "nips_2022_pGcTocvaZkJ", "3VrjEXkhPja", "-TJhni2iFfj", "ywvkrYAnsf", "hTX64kd6KXh", "nips_2022_pGcTocvaZkJ", "nips_2022_pGcTocvaZkJ", "nips_2022_pGcTocvaZkJ", "nips_2022_pGcTocvaZkJ" ]
nips_2022_AsH-Tx2U0Ug
Effective Backdoor Defense by Exploiting Sensitivity of Poisoned Samples
Poisoning-based backdoor attacks are a serious threat to training deep models on data from untrustworthy sources. Given a backdoored model, we observe that the feature representations of poisoned samples with the trigger are more sensitive to transformations than those of clean samples. This inspires us to design a simple sensitivity metric, called feature consistency towards transformations (FCT), to distinguish poisoned samples from clean samples in the untrustworthy training set. Moreover, we propose two effective backdoor defense methods. Built upon a sample-distinguishment module utilizing the FCT metric, the first method trains a secure model from scratch using a two-stage secure training module. The second method removes the backdoor from a backdoored model with a backdoor removal module, which alternately unlearns the distinguished poisoned samples and relearns the distinguished clean samples. Extensive results on three benchmark datasets demonstrate superior defense performance against eight types of backdoor attacks compared with state-of-the-art backdoor defenses. Code is available at: https://github.com/SCLBD/Effective_backdoor_defense.
Accept
The authors propose a new method for defending against backdoor attacks, based on the observation that poisoned samples are more sensitive to transformations than clean samples. They design a metric called feature consistency towards transformations (FCT) to distinguish poisoned samples from clean samples in the untrustworthy training set. The paper received favorable reviews, and the authors made substantial updates during the rebuttal phase to the general satisfaction of the reviewers. I thus recommend acceptance.
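As a concrete illustration of the FCT sensitivity score summarized in the abstract and meta-review above, a minimal PyTorch sketch is given below. The function and argument names are ours; the paper's released code may differ, for example in which layer the features are taken from and in which pair of transformations (rotation and affine are mentioned in the rebuttal) is used.

```python
import torch

@torch.no_grad()
def fct_score(x, feature_extractor, transform):
    """FCT(x) = || f(x) - f(tau(x)) ||_2^2, computed per sample for a batch x.

    feature_extractor : returns (N, D) features of the (backdoored) model
    transform         : the transformation tau(.), e.g. a random rotation/affine warp
    """
    f_x = feature_extractor(x)              # (N, D) features of the original images
    f_tx = feature_extractor(transform(x))  # (N, D) features of the transformed images
    return (f_x - f_tx).pow(2).sum(dim=1)   # (N,) larger values => more sensitive
```

Under a backdoored feature extractor, poisoned samples are expected to yield larger scores than clean ones, which is what the sample-distinguishment module exploits.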
train
[ "FIHJLUpCBJa", "JHLfrLBZaiU", "3dGD4ewg6LQ", "M0lAQb13NGv", "QTvcQmHlDXZZ", "VAdhUeKBVYK", "r1qHqHeqSKn", "mlZufn1NFVd", "yMhYBrvLL9G", "n7nzmn3mUWE", "KLUYeZz2-Bz", "OpH6jt706mD-", "K4GVYRZv44z", "USKcSwP-CTq", "ITZS4wmbEo1", "n63Bm_t8cO", "LAAoEhS-6CM", "ZQdazYtdTFu", "b2yeTSo-9F0", "4xjRo3AA04s", "cq2blWJobTA", "aWqaoddm5gX" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nThanks for you constructive comments which are really beneficial to our work. And we will add the limitation on the feature dimensionality into the final manuscript.", " I acknowledge I read all the responses and I want to thank the authors for their thorough explanations. My scores stay the same. \n\nI would like to see the limitation on the feature dimensionality in the final text of the paper. ", " Dear Reviewer, \n\nThanks for your encouraging and constructive comments, and they are very helpful for us to improve this work. \n\nSincerely,\nAuthors", " Dear Reviewer,\n\nThanks for your encouraging and constructive comments, which are really helpful to improve our work. ", " Thanks for your detailed response to my feedback. I have increased by score to a 7.", " I thank the authors for their response, which partially addressed my concerns. In particular, I appreciate the efforts of the authors that have clarified the specific novelty of this work. As such, I have raised my score accordingly.", " **Q2:** It might be good to test against optimization based poisons, such as Sleeper Agent, which don’t have patches on the training samples. I wonder if these can defeat your defense. They can also be made adaptive to beat your defense since you could add an objective that makes the features of poisoned samples insensitive to transformations.\n\n**A2:** Thanks for your insightful suggestion. We notice that the goal of the Sleeper Agent is to make samples from one particular source class misclassified as a target class when attached with the trigger, which is different from the goal of the mainstream attacks. Hence, we calculate ASR only on triggered source testing samples, instead of the entire testing set. Since Sleeper Agent is an optimization-based method, we assume the attacker is in a white-box setting, where he knows the model architecture used by the defender. With the data augmentations provided in the open source code, we could train a backdoored model with ACC=90.99% and ASR=71.20%. We then conduct our proposed D-BR to see whether it could remove the backdoor. The defense performance is shown in Table 13. Since there is no explicit or implicit objective to make features of poisoned samples insensitive to transformations, our defense could still be effective against such attack, where no patches are on the training samples.\n\n**Actually in our paper, we have tried a similar attack (SSBA [9]) to see how our defense methods perform against attacks where there are no patches on the poisoned training samples**. An example poisoned by SSBA is shown in Figure 9 (h) in Appendix C.3, which seems not different from the natural image to the human eye, similar to that by Sleeper Agent. Since SSBA takes effect on large-scale images such as ImageNet, we test our defense methods against SSBA attack on the ImageNet. As shown in Table 4 in Appendix D, our proposed method could effectively reduces ASR from 99.64% to **0.09%** while keeping the ACC as high as **83.77%**. For more analysis, please refer to Appendix D.\n\nIn fact, in order to illustrate that our proposed methods are not designed for any particular attack, we apply them to defend against various types of attacks. Particularly, we categorize existing poisoning-based backdoor attacks according to 5 criteria (*ie. 
size* *of trigger, visibility of trigger, variability of trigger, label-consistency, number of target classes*) as mentioned in Section 2, and choose 8 typical attack methods in the experiments which cover all categories under every criterion. The type of attack you have mentioned has also been considered in this paper. More details can be seen in Section 2 and Appendix C.3.\n\n**Table 13: Performance of backdoored model and D-BR against attacks with trigger invisible to human eye.**\n\n| Dataset | Attack | Backdoored model (ACC) | Backdoored model (ASR) | D-BR (ACC) | D-BR (ASR) |\n| -------- | ------------- | ---------------------- | ---------------------- | ---------- | ---------- |\n| CIFAR-10 | Sleeper Agent | 90.99% | 71.20% | 89.92% | 6.90% |\n| ImageNet | SSBA | 85.24% | 99.64% | 83.77% | 0.09% |\n\nLast but not least, we are so grateful for your suggestion that we may consider an attack which is be made adaptive to beat out defense since he could add an objective that makes the feature representations of poisoned samples insensitive to transformations. To this end, we consider **an adaptive attack** to see whether our proposed method is robust to such attack. As for the setting of the adaptive attack and the defense performance of our methods against it, please refer to **Response to FGt5: Q3**.\n\nBesides, in our paper, we conducted experiments on an ImageNet subset, which are detailed in Appendix D. And due to the limited rebuttal period, we will add the experiments on the full ImageNet into the revised manuscript. Thanks for your suggestions once again.", " **Q1:** For “paradigm 1” did you compare to very strong data augmentations, like mixup/cutmix/maxup, which have been shown to be strong defenses against backdoors or various adversarial training strategies that have been used against backdoor attacks?\n\n**A1:** Thanks for your inspiring suggestion in reminding us that the strong data augmentations could be a defense baseline for paradigm 1 (ie. secure training from scratch), which have been shown effective in defending against various attacks. To this end, we apply different strong augmentations on the poisoned dataset and train a model on the augmented data with the standard supervised learning to see whether the augmentations could prevent backdoor inserted into the model. Specifically, we choose three types of strong augmentations—Mixup, Cutout and CutMix, following implementation in https://github.com/facebookresearch/mixup-cifar10/tree/main, https://github.com/uoguelph-mlrg/Cutout, and https://github.com/clovaai/CutMix-PyTorch, respectively. Since data augmentations manipulate data in the input space, the properties (eg. pattern, distribution) of the trigger may have a large influence on the effectiveness of the augmentations. Hence, here we select three types of poisoning-based backdoor attacks—the patch-based attack BadNets (where the trigger pattern is a patch), the blend-based attack Blend (where the trigger pattern is an image) and the clean-label attack SIG (where the trigger is only attached on samples of target class). 
Results on the CIFAR-10 dataset are shown in Table 11.\n\n**Table 11: Comparison of D-ST and three strong data augmentations (Mixup, Cutout and CutMix) against three attacks on CIFAR-10.** **The best result (with the largest ACC-ASR) is denoted in bold.**\n\n| Attack | Backdoored model (ACC) | Backdoored model (ASR) | D-ST (ACC) | D-ST (ASR) | Mixup (ACC) | Mixup (ASR) | Cutout (ACC) | Cutout (ASR) | CutMix (ACC) | CutMix (ASR) |\n| ------- | ---------------------- | ---------------------- | ---------- | ---------- | ----------- | ----------- | ------------ | ------------ | ------------ | ------------ |\n| BadNets | 91.64% | 100% | **92.77%** | **0.03%** | 91.59% | 100% | 92.46% | 100% | 92.02% | 100% |\n| Blend | 92.69% | 99.99% | **91.82%** | **0.00%** | 92.09% | 100% | 92.71% | 100% | 92.86% | 99.98% |\n| SIG | 92.88% | 99.69% | **90.07** | **0.00%** | 92.14% | 99.90% | 92.78% | 99.68% | 93.12% | 99.81% |\n| Avg | 92.40% | 99.89% | **91.55%** | **0.01%** | 91.94% | 99.97% | 92.65% | 99.89% | 92.67% | 99.93% |\n\n**Table 12: Comparison of ASR** **in the first training epoch with different augmentation** **against three attacks on CIFAR-10.**\n\n| Augmentation \\ Attack | BadNets | Blend | SIG | Avg |\n| --------------------- | ------- | ------ | ------ | ------ |\n| None | 100% | 99.62% | 99.21% | 99.61% |\n| Simple | 97.81% | 99.21% | 87.33% | 94.78% |\n| Mixup | 47.38% | 98.83% | 26.96% | 57.72% |\n| Cutout | 18.90% | 96.78% | 20.72% | 45.47% |\n| CutMix | 1.48% | 93.47% | 22.17% | 39.04% |\n\n**Backdoored model:** If we directly apply the standard supervised learning on the poisoned dataset, we would obtain a backdoored model. As shown in Column 2,3 in Table 11, the average ACC is 92.40% while the average ASR is 99.89%.\n\n**D-ST:** By contrast, if we use our proposed D-ST to train a model from scratch, we could obtain a secure model. As shown in Column 4,5 in Table 11, the average ACC is 91.55% while the average ASR is 0.01%, demonstrating that D-ST could effectively prevent backdoor from inserting into the model while maintaining the high ACC.\n\n**Strong data augmentations:** However, when applying the three strong data augmentations, we discover that the performance of the trained models are similar to that of the backdoored model, all with high ACC and ASR as shown in Column 6-11 in Table 11. In this sense, it seems that these augmentations could not inhibit backdoor. Although the final ASR is high, we discover that the ASR in the first training epoch of strong data augmentations is lower than that of simple augmentations (eg. cropping, flipping) or without augmentation, as shown in Table 12. It tells that although failed to inhibit backdoor, the data augmentations could slow down the backdoor effect, since they increase the diversity of the trigger, which makes the backdoor harder to learn.", " **Defense against the input-aware backdoor attack:** \n\n**(1) Experimental setting:** \nDue to the different threat model, we have to firstly adjust the experiment, as follows:\n\n- *Step 1: generating the poisoned training dataset*: we conduct the input-aware attack on the same training dataset (*i.e.*, CIFAR-10) and the same model architecture (*i.e.*, ResNet-18) with the subsequent defense step. The attack could output both a backdoored model and a trigger generator. We generate triggers for 10% of the training samples, which form the poisoned training dataset. 
\n\n- *Step 2: baseline attack:* we train a backdoored model based on the above dataset using the standard supervised learning with 200 epochs. It serves as the baseline to measure the performance of our method. \n\n- *Step 3: our defense:* we conduct our two defense methods (D-BR and D-ST) to obtain secure models based on the above dataset. \n\n**Table 10: Performance of the backdoored model and our proposed methods against the input-aware attack on CIFAR-10.**\n\n| | Backdoored model | D-BR | D-ST |\n| ---- | ---------------- | ------ | ------ |\n| ACC | 92.23% | 89.69% | 89.08% |\n| ASR | 75.96% | 2.34% | 9.03% |\n\n**(2) Experimental results and analysis:** \n\n- *Baseline backdoored model*: As shown in Column 2 of Table 10, the backdoored model gives 92.23% ACC and only 75.96% ASR, indicating the attack is not very strong. Note that the reported ACC and ASR in the original paper are 94.65% and 99.32%. The reason of such difference is that the poisoned samples are optimized together with model parameters in input-aware attack, and when the optimized poisoned dataset are used to train another model, the difference of model parameters may significantly downgrade the backdoor effect. \n\n- *Our defense:* As shown in the last two columns of Table 10, our defense methods significantly reduce ASR while maintaining high ACC, showing its effectiveness of defending against the attack. However, it is also notable that the ASR is higher than that of defending against other attacks. The main reason is that the attack capability of this poisoned training dataset is weak (*i.e.*, only 75.96% ASR), while most other attacks achieve as high as 95% ASR. Recall that our FCT metric is based on the sensitivity of poisoned samples to transformations, which is mainly due to the overfitting of the backdoored model to the trigger. Given a weak attack, the overfitting to the trigger may be alleviated, and thus weakens our defense to some extent. Although alleviating the overfitting to the trigger could somewhat weaken our defense, the attack significantly sacrifices its own attack performance. Seeking an attack which avoids the overfitting of the trigger and also has high attack performance may be a promising direction for the poisoning-based backdoor attack.\n\n Anyway, the performance of our methods are still good. We infer that (a) there is no explicit or implicit optimization of trigger that encourages the features of poisoned samples insensitive to transformations, and (b) our methods are empirically proved robust even to adaptive attacks where the trigger is optimized to realize the aforementioned insensitivity (shown in the response to Reviewer FGt5: Q3). \n\nBesides, we have actually conducted several experiments to validate the effectiveness of our methods against attacks which have dynamic triggers. SSBA [9] is a strong attack which generates sample-specific triggers similar to those in input-aware attack. But different from the low ASR, SSBA could reach ASR as high as 99.64% on ImageNet. As shown in Table 5 of Appendix D, our method could effectively reduce ASR from 99.64% to 0.09% while keeping the ACC as high as 83.77%, which demonstrates the effectiveness of our method against dynamic triggers.", " **Q2:** For attacks such as FC, dynamic, and latent modification, are the proposed methods still effective?\n\n**R2:** Thanks for your constructive suggestion. 
\n\n**Clarification of threat model:** We would like to firstly clarify that, as clearly described in Line 102 in Section 3.1, \"*we consider the threat model of poisoning-based backdoor attacks*\", where the attacker can only manipulate the training data. In contrast, some suggested works require control over the training process, *i.e.*, the threat model of *training-controllable backdoor attacks*, which are out of scope of our work.\n\n**Descriptions of three suggested works:** We firstly describe the general information of each suggested attack, as follows:\n\n- *FC: Poison Frog! Targeted Clean-Label Poisoning Attacks on Neural Networks, NeurIPS 2018.* As clearly claimed in the Abstract of its orginal paper, *\"they control the behavior of the classifier on a specific test instance without degrading overall classifier performance\"*, it is not backdoor attack. Thus, it shouldn't be used to evaluate our backdoor defense methods. \n\n- *Dynamic: Input-aware Dynamic Backdoor Attack, NeurIPS 2020.* This work aims to jointly train a generator which generates input-aware triggers, and a backdoored model which can be activated by the generated triggers during testing. It belongs to the *training-controllable backdoor attack*, as we mentioned above. We find that the code of the github repository https://github.com/VinAIResearch/input-aware-backdoor-attack-release has been removed by its authors. Thus, in the later experiments, we adopt the implementation provided by the BackdoorBench (https://github.com/SCLBD/backdoorbench). \n\n- *Latent modification: Backdoor Attack with Imperceptible Input and Latent Modification, , NeurIPS 2021.* This work also assumes the attacker could have access to the model used by the defender, including both structures and parameters, and could insert the backdoor into model during training. It also belongs to the *training-controllable backdoor attack*. However, the code of this work has not been released. We checked the first author's homepage, there was a code link under this work, but it was linked with a repository of another ICCV 2021 work \"LIRA: Learnable, Imperceptible and Robust Backdoor Attacks\" (https://github.com/khoadoan106/backdoor_attacks). Due to the limited rebuttal period, we cannot guarantee to correctly implement this method correctly from the scratch, only according to the original paper. Hope the reviewer and Area Chair could understand this difficulty. ", " - *Differences between \"[3] Anti-backdoor learning: Training clean models on poisoned data. NeurIPS, 2021.\" (ABL) and our Distinguishment and Backdoor Removal (D-BR) method*: ABL and D-BR have the similar defense strategy that firstly learning the backdoor, then identifying the poisoned samples according to some principles, and finally removing the backdoor based on these samples. ABL and D-BR are different on the key step, i.e., **different** **principles of identifying poisoned samples**. Specifically, ABL utilizes the observation that the model fits poisoned samples much faster than clean samples, i.e., the former loss values decrease faster. Then, ABL designed a local gradient ascent technique to identify the poisoned samples according to the loss values at the early training epochs. In contrast, we utilize the observation that the feature representations of poisoned samples in a backdoored model are more sensitive to transformations than those of clean samples, which is measured by our proposed FCT metric. 
As evaluated in Section 4.3 (Line 48-265 and Figure 4), our FCT principle can identify different types of poisoned samples more stably than the ABL's principle. Moreover, the iterative relearning and unlearning mechanism in our D-BR method is also very important for the overall defense performance. As evaluated in Section 4.3 (Line 266-271 and Figure 5), it performs much better than the pure relearning and pure unlearning mechanism. \n\n**In summary**, although our methods and existing methods have the same goals (i.e., identifying poisoned samples, inhibiting the formation of backdoor or removing backdoor), their key techniques and the overall designs to achieve these goals are intrinsically different. We believe that above comparisons have clearly demonstrated these differences, which will be added into the revised appendix, to highlight the novelty and contribution of our methods to the backdoor learning community. We appreciate the reminding from the reviewer once again. ", " **Q1:** Please explain clearly the main contribution and what specific novelty of this work compared with others.\n\n**A1:** We appreciate the comment that \"this work is much like a patchwork combining existing techniques\". It reminds us to clarify the key differences with existing works more clearly, such that the novelties of our method can be highlighted. Here, we would like to explain the key differences with the mentioned 3 existing works individually. \n\n- *Differences between \"[1] Rethinking the trigger of backdoor attack. Arxiv 2020.\" (Rethinking) and our proposed feature consistency towards transformations (FCT)*. **(1) Different observation perspectives**: Rethinking locally changes the trigger to observe the change of the attack success rate (ASR), i.e., observing \"the property of existing attacks with static trigger\"; our FCT globally transforms both poisoned and clean images to observe their changes of the feature representations in the backdoored model, to further observe the different sensitivities to transformation between poisoned and clean images, i.e., observing the difference between poisoned and clean images. **(2) Different observation conclusions**: Rethinking concludes that \"the backdoor attack is sensitive to the difference between the training trigger and the testing trigger\" (the original sentence in [1]); our FCT concludes that \"the poisoned samples are much more sensitive to transformations than the clean samples in a backdoored model\" (the original sentence in our manuscript), and our FCT metric provides a quantitative measure of this sensitivity. **(3) Different usages**: Rethinking utilizes the trigger sensitivity in two ways, including \"the transformation-based defense is defined as introducing a transformation-based pre-processing module on the testing image before prediction\" and augmenting the poisoned samples with transformations to enhance its transformation robustness. Our FCT metric is used to distinguish poisoned samples from clean samples, which plays the key role in the following backdoor defense methods (i.e., D-ST and D-BR). We believe above comparisons are sufficient to demonstrate the instrinsic difference between [1] Rethinking and our FCT metric. \n\n- *Differences between \"[2] Backdoor defense via decoupling the training process. ICLR, 2022.\" (DBD) and our Distinguishment and Secure Training (D-ST) method*: Both DBD and D-ST aim to inhibit backdoor during training (ie. realize secure training). However, there are several significant differences. 
**(1) Different principles of identifying poisoned samples:** **The core point** of secure training is distinguishing poisoned samples from clean samples in the training dataset. The identification principle of DBD is that the loss values of poisoned samples are larger those that of clean samples when training the classifier (the second stage of DBD). Our identification principle is that the feature representations of poisoned samples in a backdoored model are more sensitive to transformations than those of clean samples, which is measured by our proposed FCT metric. It is clear that **our D-ST is different with DBD on this core point of a secure training-based backdoor defense**. Besides, as evaluated in Section 4.3 (Line 48-265 and Figure 4), our FCT principle can identify different types of poisoned samples more stably than the DBD's principle. **(2) Different usages and roles of the identified poisoned samples.** In DBD, since the poisoned samples are identified at the end of the second stage (i.e., after training the classifier), they can only be used for semi-supervised fine-tuning via removing the labels of the identified samples to improve the clean accuracy at the last stage. In our D-ST, since the poisoned samples are identified at the beginning, we can utilize them in all training stages. **First**, they are utilized to train a good feature extractor via semi-supervised contrastive learning (SS-CTL), based on labels of the idenfied clean samples. As evaluated in Section 4.3 (Line 281-195 and Table 3), the defense performance using SS-CTL is much better than that of using CTL (which is utilized by DBD), especially on clean accuracy. **Second**, the identified poisoned and clean samples are utilized to design the mixed cross-entropy loss (MCE loss, see Line 167 and Eq. (3)) to learn the classifier. As evaluated in Section 4.3 (Line 296-315 and Figure 6), both loss functions in MCE are important for the overall defense performance. It is clear that the poison identification plays the secondary role in DBD (learning the backbone using CTL plays the key role), while the poison identification plays the key role in our D-ST method. **In summary, there are intrinsic differences on the design logic, the key technology, between DBD and D-ST.** These differences are further reflected by the different defense performance. As shown in Section 4.2 and Table 1, D-ST performs much better than DBD. ", " **(b) Defense against adaptive attack with 100 training epochs**: As analyzed above, since the regularization term in the attack's objective function is not well fitted in 2 epochs, we then suppose the adaptive attacker trains the trigger with 100 epochs. However, as shown in the last column of Table 8, we find out that there is still a gap between the distribution of clean and poisoned samples since the mean FCT value of clean samples is 0.31, which is much smaller than that of poisoned sample, *i.e.*, 7.01. However, the overlap between the clean and poisoned distributions becomes larger, compared to the above case of 2 epochs. Consequently, the poison-precision decreases to 81.36%, while the clean-precision is still very high, up to 99.38%. It illustrates that our SD module is somewhat affected by the optimized trigger.\n\nAs shown in Table 9, when we apply the BR module and the ST module subsequently, our proposed defense methods are still effective. 
For the backdoor removal paradigm, compared to the backdoored model, although ACC slightly drops by 2.19%, D-BR could effectively reduce ASR from 99.99% to 3.33%. For the secure training paradigm, D-ST also achieves performance comparable to D-BR. These results demonstrate that compared to 2 epochs, the attack with the trigger trained with 100 epochs is more threatening. But our proposed methods could still obtain a high-performance model with relatively low ASR, illustrating that it would not be easy for the adaptive attack to insert a backdoor deep into model, even with the white-box setting.\n\nThe possible reason is that although the trigger trained with 100 epochs could better encourage the insensitivity of poisoned samples in the corresponding backdoored model, our SD module will not use this backdoored model. Instead, our SD module will train a new backdoored model with only 2 epochs based on the returned poisoned datasets. Consequently, the sensitivity of poisoned samples becomes larger in the new backdoored model used in SD module, causing the failure of the adapative attack. \n\n**Summary:** According to above evaluations and analysis, we surmise that, since the speed of inserting the backdoor and that of training a good adaptive trigger are much different, it is difficult to obtain a good trigger and a model similar to the defender-used one at the same time, causing the difficulty in designing a successful adaptive attack. In this sense, we could claim that to some extent, our defense method is robust to the possible adaptive attack, even in the most restricted white-box setting. \nFurthermore, in practice, even with a potential successful adaptive attack, simply changing the type of transformations or the model architechture used in our defense methods could be considered as an easy way to defend against the adaptive attack. \nHowever, we will keep exploration of possible adaptive attacks against our proposed defense methods in our future work, to continually contribute to the development of the backdoor learning. ", " **Table 8: The FCT mean and variance of clean and poisoned samples, clean-precision and poison-precision under different model-training epochs.**\n\n| num of training epochs $e_t$ | FCT Mean of clean samples | FCT Var of clean samples | FCT Mean of poisoned samples | FCT Var of poisoned samples | Clean-precision | Poison-precision |\n| ---------------------------- | ------------------------- | ------------------------ | ---------------------------- | --------------------------- | --------------- | ---------------- |\n| 2 | 0.00 | 0.00 | 24.26 | 22.56 | 100% | 100% |\n| 100 | 0.31 | 7.01 | 7.70 | 23.07 | 99.38% | 81.36% |\n\n**Table 9: Performance of the backdoored model and our proposed methods against the adaptive attack with trigger trained with different epochs.**\n\n| num of training epochs $e_t$ | Backdoored model (ACC) | Backdoored model (ASR) | D-BR (ACC) | D-BR (ASR) | D-ST (ACC) | D-ST (ASR) |\n| ---------------------------- | ---------------------- | ---------------------- | ---------- | ---------- | ---------- | ---------- |\n| 2 | 90.53% | 99.98% | 91.36% | 0.81% | 93.51% | 0.02% |\n| 100 | 90.70% | 99.99% | 88.51% | 3.33% | 88.82% | 4.54% |\n\n**Results and analysis**: The results of the SD module against the trained poisoned dataset of the adaptive attack are shown in Table 8, and the defense results of the proposed D-BR and D-ST are shown in Table 9. Here, we present two cases: the attacker with 2 training epochs and 100 training epochs. 
\n\n**(a) Defense against adaptive attack with 2 training epochs**: As described in Section C.5 in Appendix (see Line 527), it is notable that the backdoored model adopted to compute the FCT metric is trained with just $e_t=2$ epochs, since it is enough to insert the backdoor. Thus, we firstly suppose that the adaptive attacker also uses 2 epochs to train the trigger. \nAs shown in the first row of Table 8, there is still remarkable difference on the distribution between the poisoned and clean samples, similar to what is shown in Figure 3 in the manuscript. Since figures cannot be shown here, we exhibit the distribution in the form of mean and variance. For clean samples, both the mean and variance of their FCT values are very close to 0.00. In contrast, for poisoned samples, the mean and variance are 24.26 and 22.56, respectively. More specifically, the smallest FCT value among all poisoned samples is 3.81, and there are only 20 clean samples (out of the total 45500 clean samples) that are greater than 3.81. The result demonstrates that the overlap between the distribution of clean and poisoned samples is tiny, which is helpful for distinguishing samples. As a result, both the clean-precision and the poison-precision (defined in Appendix E, *i.e.*, the precision of the distinguished clean and poisoned samples, respectively), are **100%**. It illustrates that our SD module is still effective in distinguishing samples, not influenced by the optimized trigger.\n\nConsequently, as shown in Table 9, when we apply the BR module based on the above distinguishing result, our proposed D-BR method could effectively reduce ASR from 99.98% to 0.81%, and even improve ACC by 0.83%, compared to the backdoored model. Moreover, the proposed D-ST method could even perform better. It securely trains a model from scratch which has ACC as high as 93.51% (2.98% higher than that of the backdoored model) and keeps ASR as low as 0.02%. The defense performance demonstrates that our proposed two defense methods are still effective in defending against the adaptive attack.\n\nThe main reason is that the backdoor could be quickly learned within 2 epochs (*i.e.*, the value of the first loss term in the above objective function decreases quickly), while the regularization term in the objective is difficult to be fitted within 2 epochs. Consequently, the poisoned samples with the trigger trained in 2 epochs are still more sensitive to transformations than clean samples, causing the good defense performance of our methods. ", " **Q3:** Adaptive attack: \"Is it possible to train a trigger in a white-box setting with all of the hyper-parameters of the defense method known to the attacker?\"\n\n**A3:** Thanks for this constructive suggestion. In the following, we firstly clarify the detailed setting, the objective function and the implemented algorithm of the adaptive attack. Then we will present the defense performance of our methods against this adaptive attack, and provide an analysis of the results.\n\n**What the adaptive attacker knows:** As suggested by the reviewer, here we adopt a full white-box setting that all of the hyper-parameters of the defense method are known to the attacker. 
Specifically, the attacker knows: **(a)** the defender will distinguish samples according to the sensitivity of samples to transformations which is measured by the FCT metric (*i.e.*, $\\Delta_{trans}(x; \\tau,f_{\\theta_e}) = \\Vert f_{\\theta_e}(x)-f_{\\theta_e}(\\tau(x)) \\Vert_2^2$), **(b)** $f_{\\theta_e}(\\cdot)$ represents the feature extractor of a backdoored model which is trained on the poisoned dataset with supervised learning, and **(c)** what kind of transformations $\\tau(\\cdot)$ and the model architecture the defender would use.\n\n**The objective function of the adaptive attack:** The goal of the adaptive attacker is to optimize a trigger which could make the feature representations of the poisoned samples insensitive to the transformations. Note that the feature representations are certain intermediate outputs of *a backdoored model*. To this end, the adaptive attack could be formulated as follows: \n\n$\\min_{\\theta_e, \\theta_c, \\delta} \\frac{1}{\\vert \\bar{D_{train}}\\vert}\\sum_{(x,y)\\in \\bar{D_{train}}}-\\log [h_{\\theta_c}(f_{\\theta_e}(x))]_y$\n\n$+ \\frac{1}{2\\vert D_p\\vert} \\sum_{(x,y)\\in D_p} \\Vert f_{\\theta_e}(x)-f_{\\theta_e}(\\tau(x)\\Vert_2^2,$\n\nwhere $\\bar{D}_{train}=D_c \\cup D_p$ \n\nand $D_p = \\{(x_i\\oplus \\delta, t)\\}_{i=1}^{m_p}$. The first item is inserting backdoor into the model while the second item is realizing the insensitivity.\n\n**The algorithm to optimize the above objective function:** We adopt the patch-based type backdoor attack, *i.e.*, BadNets, and replace its static grid pattern trigger (shown in Figure 9(a) in Appendix) as a learnable variable. The trigger variable is initialized as the random noise sampled from uniform distribution. We optimize the parameters of the model and the trigger variable simultaneously, using the standard backpropagation with SGD, with the maximal 100 epoches.", " **Q2:** All the experiments are on a single dataset. This raises questions around how easy it is to tune the hyper-parameters on a different dataset and model architecture, and how sensitive FCT is to the dimensionality of embeddings?\n\n**R2:** Thanks for this insightful comment. Firstly, we would like to clarify that we have actually conducted experiments on three datasets, including CIFAR-10, CIFAR-100 and an ImageNet subset, as demonstrated at Line 196-197. Due to space limit, the results on ImageNet are shown in Table 5 in Appendix D. In the following, we will explain the sensitivity of hyper-parameters to different datasets, model architectures, and feature dimensionalities, respectively. \n\n**Sensitivity to datasets:** As mentioned in Appendix C.5, we adopt the same setting of hyper-parameters on all of the datasets. Taken the D-BR method as an example, the average ASR reaches as low as 0.31%, 0.07% and 0.02% on the datasets. Meanwhile, compared with the backdoored model, the average ACC merely drops by 0.04%, 1.07% and 1.92%. 
The superior performance across three datasets demonstrates that the hyper-parameters are stable across different datasets.\n\n**Table 7: Performance under different model architectures against BadNet attack on CIFAR-10 dataset.**\n\n| Model architecture | Dimensionality of feature representation | Clean-precision of $\\hat{D}_c$ | Poison-precision of $\\hat{D}_p$ | Backdoored model (ACC) | Backdoored model (ASR) | D-BR (ACC) | D-BR (ASR) | D-ST (ACC) | D-ST (ASR) |\n| ------------------ | ---------------------------------------- | ------------------------------ | ------------------------------- | ----------------------- | ---------------------- | ---------- | ---------- | ---------- | ---------- |\n| ResNet-18 | 512 | 100% | 100% | 91.64% | 100% | 92.83% | 0.40% | 92.77% | 0.03% |\n| ResNet-50 | 2048 | 100% | 9.00% | 90.88% | 100% | 88.47% | 0.00% | 90.32% | 5.89% |\n| VGG-19 | 512 | 100% | 100% | 91.09% | 100% | 90.90% | 0.00% | \\ | \\ |\n| DenseNet-161 | 8832 | 99.93% | 21.09% | 90.84% | 100% | 89.82% | 0.00% | \\ | \\ |\n\n**Sensitivity to model architectures and feature dimensionalities:** In addition to the ResNet-18 we have tested, here we conduct experiments on another three mainstream model architectures, including ResNet-50, VGG-19 and DenseNet-161, with the same parameter setting. Results are shown in Table 7. Besides, we uniformly choose the output of the penultimate layer as the feature representation, resulting in different dimensionalities. Thus, these results can also reflect the sensitivity to feature dimensionalities. \n\nIn the following, we will analyze the effectiveness of different modules so as to evaluate the sensitivity of the hyper-parameters.\n\n- **Effectiveness of SD module:** Both ResNet-18 and VGG-19 achieve 100% precision. In contrast, the poison-precision of ResNet-50 and DenseNet-161 is relatively low, indicating that with the increase in the dimensionality, the gap between the FCT of clean and poisoned samples may be smaller. However, since their clean-precision is as high as about 100 % and that our methods are robust to wrong distinguishment as analyzed in Appendix E, this low poison-precision won't significantly influence the final performance which is analyzed subsequently. \n- **Effectiveness of BR module:** Compared with the performance of the backdoored model, our proposed BR module (*i.e.*, the D-BR method) could reduce ASR from 100% to 0% on the three new architectures. Meanwhile, ACC drops by 2.41% at most.\n- **Effectiveness of ST module:** Note that since SupContrast [29] (used in stage 1 of D-ST) only released codes for the ResNet architecture, and due to the time limit of rebuttal, here we didn't evaluate the ST module on VGG-19 and DenseNet-161. Compared with the backdoored model which directly employs the supervised learning, our proposed ST module (*i.e.*, the D-ST method) trains a secure model from scratch. \n\n**Summary:** The hyper-parameters of our methods are generalizable across datasets, model architectures and feature dimensionalities. However, we also found that the feature dimensionality may affect the setting, as the Euclidean distance used in our FCT metric may not be suitable for high dimensonal feature space. We will explore it in the future. We sincerely appreciate the reviewer's insightful comments once again.", " **Q1:** More experiments missing about the detection rate of the FCT metric and the sensitivity to hyper-parameters. 
\n\n**R1:** Before directly answering these two concerns, we would like to briefly list the related experiments we have conducted. Then, based on these existing experiments, we will give responses to the reviewer's concerns. \n\n**Related experiments we have conducted:**\n\n- **(1) First part of Section 4.3** answers questions as: How about the effectiveness of the SD module? How does our proposed FCT metric perform compared with other metrics?\n\n- **(2) Appendix E** answers questions as: How is the precision of the clean and poisoned samples distinguished by the FCT metric, respectively? How do the selected data transformations affect the precision?\n\n- **(3) Appendix F** answers questions as: How sensitive is our method to the proportion values $\\alpha_c, \\alpha_p$? How do they affect the defense performance, respectively?\n\n- **(4) Appendix G** answers questions as: How stable is our method against different poisoning rates, even with the invariant $\\alpha_c, \\alpha_p$? Is the SD module still effective?\n\nWe think that Appendix E,F,G could answer most of your concerns with adequate details. For your convenience, here we reorganize the above experiments to present a more clear explanation, as follows. \n\n**(a) Detection rate of the FCT metric:** The detection rate you mentioned is exactly the precision we defined in Appendix E. Specifically, the clean(poison)-precision refers to precision of the distinguished clean(poisoned) samples $\\hat{D}_c$($\\hat{D}_p$). As shown in Figure 11, we presented the clean/poison-precision matrix under different pairs of transformations adopted in the FCT metric. It is shown that the precisions are very high (almost up to 100%) in most cases. Specifically, in our main experiments, we specified the first transformation as *rotate* and the second one as *affine*, corresponding to the precision at Row 1, Column 2 of each precision matrix, which is as high as 100% in most cases. We think that the study could demonstrate the effectiveness of the FCT metric, as well as its stableness to different transformations. \n\n**(b) Sensitivity to $\\alpha_c, \\alpha_p$?:** $\\alpha_c$($\\alpha_p$) denotes the proportion of samples that are identified as clean(poisoned). As shown in Figure 3, the ground-truth clean samples tend to have a smaller FCT, while the ground-truth poisoned samples have a larger FCT. Thus, we sorted the training samples according to the ascending order of FCT, where the first $\\alpha_c$ samples are identified as clean, while the last $\\alpha_p$ as poisoned. More details of this algorithm are presented in Appendix A.1.\n\nNote that many existing backdoor works adopted the setting of 10% poisoning rate, which is also adopted in our main experiments (see Line 195). To avoid incorrectly identifying ground-truth clean samples as poisoned, we set $\\alpha_p$ as 5% in the main experiments. Besides, we also conducted experiments in Appendix G to see how this $\\alpha_p$=5% setting performs under an exact poisoning rate of 5%. 
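For concreteness, a minimal sketch of this FCT-based splitting step is given below (the per-sample loop, the single composite `transform` standing in for the chosen rotate/affine transformations, and the function name are simplifications assumed for the sketch):

```python
import torch

@torch.no_grad()
def sd_split(feature_extractor, images, transform, alpha_c=0.2, alpha_p=0.05):
    """Rank training samples by FCT and return the indices identified as clean
    (smallest alpha_c fraction) and as poisoned (largest alpha_p fraction)."""
    fct = []
    for x in images:                                     # iterable of (C, H, W) tensors
        x = x.unsqueeze(0)
        diff = feature_extractor(x) - feature_extractor(transform(x))
        fct.append(diff.flatten(1).pow(2).sum().item())  # squared L2 distance in feature space
    order = sorted(range(len(fct)), key=fct.__getitem__)            # ascending FCT
    clean_idx = order[: int(alpha_c * len(order))]                   # \hat{D}_c: least sensitive
    poison_idx = order[len(order) - int(alpha_p * len(order)):]      # \hat{D}_p: most sensitive
    return clean_idx, poison_idx
```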
Besides, we find that with $\\alpha_p$=5%, correctly identifying a few clean samples (*i.e.*, $\\alpha_c$=20%) is sufficient to achieve good defense performance using our methods, which is empirically validated by the following ablation study.\n\nIn order to evaluate the sensitivity of our methods to different values of $\\alpha_c$ and $\\alpha_p$, we fixed one hyper-parameter while varying the other one to see the changing of the defense performance, as shown in Figure 13 in the Appendix F.\n\n- **Fixing $\\alpha_p$ as 5% and varying $\\alpha_c$ from 0% to 80%:** As shown in Figure 13(a), we found that as $\\alpha_c$ grows, ACC increases steadily and finally converges. However, when $\\alpha_c$ is too large (eg. 80%), the distinguished clean samples may contain ground-truth poisoned ones, resulting in the rise of ASR. Hence, the range from 20% and 40% is appropriate for $\\alpha_c$.\n\n- **Fixing $\\alpha_c$ as 20% and varying $\\alpha_p$ from 0% to 20%:** As shown in Figure 13(b), we found that with the increase of $\\alpha_p$, ASR declines steadily and finally converges. Nevertheless, excessive distinguished poisoned samples with larger $\\alpha_p$ could hurt the model and lead to a reduction in ACC. Therefore, a moderate $\\alpha_p$ (*e.g.*, the range from 5% to 10%) is preferred.\n\nThe studies shown in Figure 13 demonstrate that our method could show superior and stable defense performance within a relatively wide range of values of $\\alpha_c$ and $\\alpha_p$. \n\nMoreover, in addition to the above analysis, we have also presented many other experiments in Appendix E,F,G to explore the SD module from different aspects.", " **Q1:** In the introduction, it is suggested that the authors add the background to the application of the proposed method, thus enhancing the motivation for the paper. \n\n**A1:** Thanks for pointing out the importance of the threat model we discuss in this paper, and we are encouraged by your positive comments on our work. Following your suggestion, we will add the following background into the revised manuscript, as follows.\n\nThe tremendous success of DNNs in a variety of fields relies heavily on the growing availability of large datasets. Training a DNN with good performance often requires a large amount of training data. In this way, annotating all data manually is considered to be time-consuming and unrealistic. As a result, practitioners usually obtain data by purchasing from a third-party data provider or collecting some open-sourced databases. However, some malicious attackers may poison the dataset so that when a user downloads this dataset and trains a model on it locally, the trained model could contain certain stealthy backdoor, which could be activated by a particular trigger by the attacker. This kind of attack is called data poisoning-based backdoor attack, which could pose a serious security threat to the DNN training. Therefore, designing a method to defend against this kind of attack is of great value and importance, which is the motivation for our work.\n\n------\n\n**Q2:** The authors only list the researches for the image super-resolution and don't detail the reasons why these approaches are not sufficient for their goal.\n\n**A2:** We are confused about the key word *image super-resolution* in the above concern, as our work is not related to this topic. Would you please provide some clarification? We are willing to answer any further concern. 
", " In this paper, the authors reveal the sensitivity of poisoned samples to transformations and propose a sensitivity metric FCT. And the authors propose two effective backdoor defense methods for training a secure model from scratch and removing backdoor from the backdoored mode. The authors design a simple sensitivity metric to distinguish poisoned samples from clean samples in the untrustworthy training set, and propose two effective backdoor defense methods. The paper tackles an interesting issue, and the efforts of the authors are clear in investigating the problem and in writing the manuscript. 1. In the introduction, it is suggested that the authors add the background to the application of the proposed method, thus enhancing the motivation for the paper.\n2. The authors only list the researches for the image super-resolution and don't detail the reasons why these approaches are not sufficient for their goal. Yes, the authors adequately addressed the limitations and potential negative societal impact of their work.", " In this paper the authors: (a) Attribute the effectiveness of backdoor attacks to overfitting which they measure by proposing a metric called Feature Consistency towards Transformations. Using this metric they partition a given training dataset into potentially clean and potentially polluted samples and (b) They propose two defense algorithms (i) train a secure model from scratch (ii) Training a backdoored model first then removing the backdoor from the model.\n\nTo show (a) they pass the original image with the trigger and an augmentation of the triggered image to the same model and calculate the L2 distance in the feature embedding space. They define large distances mean more sensitivity. Using this metric they create a \"sample-distinguishment\" SD module to partition a given dataset by thresholding the FCT scores.\n\nFor defending against backdoor attacks from a potentially polluted dataset they propose two approaches:\n1. Secure Training From Scratch: They follow an approach similar to [29] Contrastive Supervised Learning where they have a two term contrastive loss (a) self-supervised loss on clean and polluted samples (b) supervised loss on clean vs polluted designated by the SD module using FCT thresholding. After they train the feature extractor, they train a classifier on top by minimizing a two term cross entropy loss (i) increasing the likelihood of predicting clean samples as their correct class (i) decreasing the likelihood of predicting the identified polluted samples as incorrect.\n\n2. Unlearn and relearn: on the found clean and polluted sets, they iteratively decrease the likelihood of the polluted predictions and increase the likelihood of the clean predictions using cross entropy loss.\n\nThey show with extensive ablation studies the effectiveness of the different choices they make in their defense algorithms.\n Strengths:\n- Novelty of their method and uncovering an interesting fact about the backdoored models and triggers.\n- Presentation and clarity.\n- Thorough experimentation for supporting their claims on the proposed defense method.\n\nWeaknesses:\n- All the experiments are on a single dataset, CIFAR. This raises questions around how easy it is to tune the different hyper parameters in the model on a different dataset and model architecture.\n\n- More experiments missing about the detection rate of the FCT metric. Simple ROC curves could give a sense of the power of their metrics. 
I understand that the first ablation study is trying to indirectly address this but since this is such an important idea in the paper, it could be isolated and tested.\n - Are the chosen hyper parameters easily transferrable to other datasets and network architectures?\n- How sensitive is FCT to the dimensionality of embeddings since it's calculating a euclidean distance? Will it become harder to find a threshold for higher dimensions?\n- How are the 0.05 and 0.2 thresholds set for the SD module?\n- Is it possible to train a trigger in a whitebox setting with all of the hyper parameters of the defense method know to the attacker?\n\n - An important aspect of the work is setting the threshold for SD. It is not explained how this threshold can be tuned without knowing beforehand which samples contain the trigger and the distribution of distances. Or it's not discussed how sensitive the method is setting these thresholds to a different value.\n\n- A problem can be that in a whitebox setting the attacker can design the trigger such that FCT and its set thresholds are not effective by incorporating FCT in attack generation process by adding the inverse FCT to their optimization objectives for finding the trigger.", " Through the observation that clean and backdoored data with a dissimilar feature representation after data transformation techniques (e.g. rotation, scaling), the author proposed a sensitivity metric, feature consistency towards transformations (FCT), to detect the potential backdoor samples. The author further proposed two backdoor removing modules with inspiration from the existing defenses of semi-supervised learning and backdoor unlearning. Extensive results show that the proposed methods outperform the existing backdoor defenses either in backdoor detection or backdoor removal.\n This paper is well-organized and the main insight is the observation of feature separation by data transformations. The defense solution including a secure training (ST) module and a backdoor removal (BR) module seems simple yet effective against various classical backdoor attacks. At the same time, I have some concerns about the paper's contributions:\n\n- At a first glance, the proposed methods seem as a unified framework including backdoor detection and backdoor defense forming an integral defense pipeline from training to post-training phase. But after a depth-reading, I consider this work is much like a patchwork combining existing techniques. For instance, the proposed feature detection matrix has been reflected in [1], and the other two defense modules ST and BR are much related to the DBD [2] and ABL [3] respectively. So, please explain clearly the main contribution and what specific contribution\\novelty of this work compared with others.\n\n- In addition, I doubt the generality of this method, for which the underlying assumption is that there are differences in feature representation between backdoor samples and clean samples. In fact, this assumption is often related to the design of backdoor models, and trigger types (static trigger, dynamic trigger, optimization-based trigger, etc). For more powerful adaptive attacks such as FC [4], dynamic [5], and latent modification [6], this assumption is no longer valid. In this case, how to ensure adequate defense? The author is invited to give an explanation and further experimental evidence.\n\n[1] Li Y, et al. Rethinking the trigger of backdoor attack. arXiv preprint arXiv:2004.04692, 2020. \n[2] Huang K, et al. 
Backdoor defense via decoupling the training process. ICLR, 2022. \n[3] Li Y, et al. Anti-backdoor learning: Training clean models on poisoned data. NeurIPS, 2021. \n[4] Shafahi A, Huang W R, Najibi M, et al. Poison frogs! targeted clean-label poisoning attacks on neural networks. NeurIPS, 2018. \n[5] Nguyen T A, Tran A. Input-aware dynamic backdoor attack. NeurIPS 2021. \n[6] Doan K, Lao Y, Li P. Backdoor attack with imperceptible input and latent modification. NeurIPS, 2021. It is still unclear that the underlining assumption behind ‘feature consistency' is supported efficiently by the proposed solutions with only empirical results. It needs some explanation about this result to backup this drawback.\n", " The authors observe that the features of poisoned samples are more sensitive to image transformations and use this observation to filter out poisoned data from the training set. They use this for two different defenses, one to filter out training data before training and one to remove a backdoor from an already backdoored model. They test against a range of attacks and compare to a variety of defenses. I thank the authors for their interesting work! This topic is important for security, and I think the NeurIPS community will benefit from new works on defense against poisoning. The experimental results are pretty strong, and the work compares to other defenses and a range of attacks, so I lean towards accepting.\n\nFeedback can be found below:\n\nFor “paradigm 1” did you compare to very strong data augmentations, like mixup/cutmix/maxup, which have been shown to be strong defenses against backdoors or various adversarial training strategies that have been used against backdoor attacks?\n\nIt might be good to test against optimization based poisons, such as Sleeper Agent, which don’t have patches on the training samples. I wonder if these can defeat your defense. They can also be made adaptive to beat your defense since you could add an objective that makes the features of poisoned samples insensitive to transformations.\n\nMinor point: Full ImageNet does not seem prohibitively expensive here, so it would be good to test there, since it’s a standard setting.\n N/A The authors refer to Appendix G for limitations which just contains experiments with different percentages of training samples poisoned. I don't think this fully encompasses limitations of this paper, but I think the limited testing scenarios are clear to a reader, so it's not that big of a deal." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "JHLfrLBZaiU", "4xjRo3AA04s", "VAdhUeKBVYK", "QTvcQmHlDXZZ", "r1qHqHeqSKn", "OpH6jt706mD-", "aWqaoddm5gX", "aWqaoddm5gX", "cq2blWJobTA", "cq2blWJobTA", "cq2blWJobTA", "cq2blWJobTA", "4xjRo3AA04s", "4xjRo3AA04s", "4xjRo3AA04s", "4xjRo3AA04s", "4xjRo3AA04s", "b2yeTSo-9F0", "nips_2022_AsH-Tx2U0Ug", "nips_2022_AsH-Tx2U0Ug", "nips_2022_AsH-Tx2U0Ug", "nips_2022_AsH-Tx2U0Ug" ]
nips_2022_er4GR0wHWQO
Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm
The Partial Area Under the ROC Curve (PAUC), typically including One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC), measures the average performance of a binary classifier within a specific false positive rate and/or true positive rate interval, which is a widely adopted measure when decision constraints must be considered. Consequently, PAUC optimization has naturally attracted increasing attention in the machine learning community within the last few years. Nonetheless, most of the existing methods could only optimize PAUC approximately, leading to inevitable biases that are not controllable. Fortunately, a recent work presents an unbiased formulation of the PAUC optimization problem via distributional robust optimization. However, it is based on the pair-wise formulation of AUC, which suffers from limited scalability w.r.t. sample size and a slow convergence rate, especially for TPAUC. To address this issue, we present a simpler reformulation of the problem in an asymptotically unbiased and instance-wise manner. For both OPAUC and TPAUC, we come to a nonconvex strongly concave min-max regularized problem of instance-wise functions. On top of this, we employ an efficient solver that enjoys a linear per-iteration computational complexity w.r.t. the sample size and a time-complexity of $O(\epsilon^{-1/3})$ to reach an $\epsilon$-stationary point. Furthermore, we find that the min-max reformulation also facilitates the theoretical analysis of generalization error as a byproduct. Compared with the existing results, we present new error bounds that are much easier to prove and could deal with hypotheses with real-valued outputs. Finally, extensive experiments on several benchmark datasets demonstrate the effectiveness of our method.
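(For readers less familiar with these measures, a small NumPy sketch of the empirical quantities being optimized; restricting to the hardest score fractions mirrors the quantile-based definitions used in the discussion below, and the function name, the tie handling, and the flooring of the fractions are illustrative choices.)

```python
import numpy as np

def empirical_partial_auc(pos_scores, neg_scores, alpha=1.0, beta=0.3):
    """Empirical partial AUC restricted to the hardest alpha-fraction of positives
    (lowest scores) and the hardest beta-fraction of negatives (highest scores);
    alpha=1.0 gives the one-way variant (FPR <= beta)."""
    pos = np.sort(np.asarray(pos_scores, dtype=float))        # ascending: hardest positives first
    neg = np.sort(np.asarray(neg_scores, dtype=float))[::-1]  # descending: hardest negatives first
    hard_pos = pos[: max(1, int(alpha * len(pos)))]
    hard_neg = neg[: max(1, int(beta * len(neg)))]
    diff = hard_pos[:, None] - hard_neg[None, :]
    # fraction of correctly ranked (positive, negative) pairs; ties count 1/2
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()
```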
Accept
The paper presented a novel reformulation of maximizing PAUC in an asymptotically unbiased and instance-wise manner. Based on this formulation, the authors presented an efficient stochastic min-max algorithm for OPAUC and TPAUC maximization. Convergence and generalization analyses were conducted. The concerns and questions are well addressed in the rebuttal. Following the recommendation from the reviewers, I recommend its acceptance.
train
[ "NC_kPC_FyZ", "my5XGQ2wYAv", "zx9ApGRjs_", "E6gzBn1k0oz", "jYA0GZmRPOZ", "lsAm1SNwur3", "RPmibdUySzT", "C06fF-b0qdb", "7JBCjnmMgX5", "piBdHZRPCCk", "UvDhsX55nbV", "p0NKdSs56V", "-HnuACj0lr3", "bYm0TjxljJe", "URU8OW1IzgH", "rplbj07AGSJ", "bjlaaHXzfFM", "RWM3108R5SF", "4pzM61h-5Hm", "TnDol5yBhgaJ", "u3CK0Ri2BMR", "m0eUZkJgwDD", "sxD3QRCV3Hh", "k92K3JuWulV", "PSe3kYoKbZz", "TG5-uSKkMzQ", "9_QOmOPnbwW" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors have addressed all my concerns. I'm very appreciated to see the new version with so many revisons done! So, I'm happy to keep my score as a strong accept.", " Thanks for making modifications. I raised my score since the authors provide solid proof to fix the problems and make necessary modifications in the paper. \n\n", " Thank you so much for your suggestion!\n\n1. We have changed the words in the abstract to \n\n> we employ an efficient solver that enjoys a linear per-iteration computational complexity w.r.t. the sample size and a time-complexity of $O(\\epsilon^{-1/3})$ to reach a $\\epsilon$ stationary point.\n\nwhere the comparison is removed\n\n2. We have corrected the name in Sec.6.2 and 6.2. Please check the main paper.\n\n3. We have removed Tab.4 as you suggested.\n\n4. Now our main paper is less than 9 page. According to the instruction of NeurIPS, references and the checklist should not be counted. ", " Thank you very much for your detailed response and it solves my problem! I checked the rebuttal revision and noticed that you have fixed most theorems and unfair tables. However, there are some places still needed to be fixed. If you can submit a new version with following modifications, I would raise my score.\n\n1. Since you have removed the unfair comparison with [39], the complexity comparison in **abstract** should also be removed.\n\n2. To be consistent with the Table 1, I would suggest you fix some typos about algorithms' names in Sections 6.2 and 6.3.\n\n3. It is claimed that your algorithm can be applied to OPAUC optimization within a fixed FPR range (Line 225-227). To show that, formulation of partial AUC within a fixed FPR range should be provided in the paper. Otherwise, I would suggest removing Table 4.\n\n4. The page limit of the paper is 9 pages.\n", " Dear the ACs, and the Reviewers, \nThank you so much for your valuable comments! They really helped us improve our manuscript! We have added all our revisions in the new version of our paper, including both the main file and the appendix in the supplementary materials.", " Thanks for the response and all the clarifications!\n\nI will increase my score due to the corrections and clarifications made.", " Thank you so much for your timely feedback as well as the inspiring questions. \n\n> **(Q1)** The factor of $p$, $1-p$\n\n **(A1)** Thank you so much for your correction. There is a typo in the main paper, for which we are sorry. Actually, in the appendix we have corrected it. We will correct it in the new version. \n\n\n> **(Q2)** The problem of Theorem 1 \n\n **(A2 part 1)**\nWe are so grateful for the reviewer's double-check! The reviewer's concern seems to be about the correctness of using the constraint $\\gamma \\in [b-1,1]$ to replace $\\gamma \\in [-1,1]$ in the reformulation. \nWe are sorry for omitting the proof of the details. In fact, we can show that this is correct. Our proof can be established by Lemma A, Lemma B, and Theorem A. 
We will add it in the appendix in the new version.\n\n> **Throughout the proof, we will define:**\n$$\n\\begin{aligned}\n&a^* = \\mathbb{E}\\_{\\boldsymbol{x}\\sim\\mathcal{D}\\_{\\mathcal{P}}}[f(\\boldsymbol{x})] &:= E\\_+\\\\\\\\\n&b^* = \\mathbb{E}\\_{\\boldsymbol{x}'\\sim\\mathcal{D}\\_{\\mathcal{N}}}[f(\\boldsymbol{x}')|f(\\boldsymbol{x}')\\ge \\eta\\_\\beta(f)] &:=E\\_- \\\\\\\\\n&b^* - a^* &:= \\Delta E\\\\\\\\ \n&\\tilde{a}^* = \\mathbb{E}\\_{\\boldsymbol{x}\\sim\\mathcal{D}\\_{\\mathcal{P}}}[f(\\boldsymbol{x})|f(\\boldsymbol{x})\\le \\eta\\_\\alpha(f)] &:= \\tilde{E}\\_+\\\\\\\\\n&b^* - \\tilde{a}^* &:= \\Delta \\tilde{E}\\\\\\\\\n& \\mathbb{E}\\_{\\boldsymbol{x}\\sim\\mathcal{D}\\_\\mathcal{P}}[(f(\\boldsymbol{x})-a)^2]& :={E}\\_a\\\\\\\\ \n& \\mathbb{E}\\_{\\boldsymbol{x}\\sim\\mathcal{D}\\_\\mathcal{P}}[(f(\\boldsymbol{x})-a)^2|\nf(\\boldsymbol{x})\\leq\\eta\\_\\alpha(f)]& :=\\tilde{E}\\_a\\\\\\\\ \n& \\mathbb{E}\\_{\\boldsymbol{x}'\\sim\\mathcal{D}\\_{\\mathcal{N}}}[(f(\\boldsymbol{x}')-b)^2|f(\\boldsymbol{x}')\\geq\\eta\\_{\\beta}(f)] &:= E\\_b \\\\\\\\\n&\\mathbb{E}\\_{\\boldsymbol{x}\\sim\\mathcal{D}\\_\\mathcal{P}}[f(\\boldsymbol{x})^2|\nf(\\boldsymbol{x})\\leq\\eta\\_\\alpha(f)] &:=E\\_{-,2}\\\\\\\\\n& \\mathbb{E}\\_{\\boldsymbol{x}'\\sim\\mathcal{D}\\_{\\mathcal{N}}}[f(\\boldsymbol{x}')^2|f(\\boldsymbol{x}')\\geq\\eta\\_{\\beta}(f)] &:=E\\_{+,2}\n\\end{aligned}\n$$\n\n> **Lemma A (The Reformulation for OPAUC)** For a **fixed** scoring function $f$, the following two problems shares the same optimum, given that the scoring function satisfies: $f(\\boldsymbol{x}) \\in [0,1],~ \\forall \\boldsymbol{x}$:\n$$\n\\begin{aligned}\n&\\boldsymbol{(OP1)} \\min\\_{(a,b)\\in[0,1]^2}\\max\\_{\\gamma \\in [-1,1]}\n\\mathbb{E}\\_{\\boldsymbol{x}\\sim\\mathcal{D}\\_\\mathcal{P}}[(f(\\boldsymbol{x})-a)^2] +\n\\mathbb{E}\\_{\\boldsymbol{x}'\\sim\\mathcal{D}\\_\\mathcal{N}}[(f(\\boldsymbol{x}')-b)^2|\nf(\\boldsymbol{x}')\\geq\\eta\\_\\beta(f)] \\\\\\\\\n+2\\Delta E + 2\\gamma \\Delta E-\\gamma^2\n\\\\\\\\\n&\\boldsymbol{(OP2)} \\min\\_{(a,b)\\in[0,1]^2}\\max\\_{\\gamma \\in [b-1,1]}\n\\mathbb{E}\\_{\\boldsymbol{x}\\sim\\mathcal{D}\\_\\mathcal{P}}[(f(\\boldsymbol{x})-a)^2] +\n\\mathbb{E}\\_{\\boldsymbol{x}'\\sim\\mathcal{D}\\_\\mathcal{N}}[(f(\\boldsymbol{x}')-b)^2|\nf(\\boldsymbol{x}')\\geq\\eta\\_\\beta(f)] \\\\\\\\\n+2\\Delta E + 2\\gamma \\Delta E-\\gamma^2\n\\end{aligned}\n$$\n\n\n> **Remark A** $\\boldsymbol{(OP1)}$ and $\\boldsymbol{(OP2)}$ have the equivalent formulation:\n$$\n\\begin{aligned}\n\\boldsymbol{(OP1)}\\Leftrightarrow \\min\\_{(a,b)\\in[0,1]\\^2}\\max\\_{\\gamma \\in [-1,1]} &\\mathbb{E}\\_{\\boldsymbol{z}\\sim\\mathcal{D}\\_{\\mathcal{Z}}}\\Big[\\left[(f(\\boldsymbol{x})-a)\\^2-\n2(1+\\gamma)f(\\boldsymbol{x})\\right]y/p\\\\\\\\\n& +\n\\left[(f(\\boldsymbol{x})-b)\\^2+2(1+\\gamma) f(\\boldsymbol{x})\\right]\\(1-y\\)/[\\beta(1-p)] -\\gamma\\^2\\Big].\n\\end{aligned}\n$$\n$$\n\\begin{aligned}\n\\boldsymbol{(OP2)}\\Leftrightarrow \\min\\_{(a,b)\\in[0,1]\\^2}\\max\\_{\\gamma \\in [b-1,1]}& \\mathbb{E}\\_{\\boldsymbol{z}\\sim\\mathcal{D}\\_{\\mathcal{Z}}}\\Big[\\left[(f(\\boldsymbol{x})-a)\\^2-\n2(1+\\gamma)f(\\boldsymbol{x})\\right]y/p\\\\\\\\\n& + \\left[(f(\\boldsymbol{x})-b)\\^2+2(1+\\gamma) f(\\boldsymbol{x})\\right]\\(1-y\\)/[\\beta(1-p)] -\\gamma\\^2\\Big].\n\\end{aligned}\n$$\n\n **PROOF**\nFrom the proof of our main paper, we know that $(OP1)$ has a closed-form minimum:\n$$\n E_{a^*} + {E}_{b^*} + (\\Delta E)^2 + 2 \\Delta E.\n$$\n\nHence, we only need to prove that 
$(OP2)$ has the same minimum solution.\n\nBy expanding $(OP2)$, we have:\n$$\n\\begin{aligned}\n\\min\\_{(a,b)\\in[0,1]\\^2}\\max\\_{\\gamma\\in[b-1,1]}\n\\mathbb{E}\\_{\\boldsymbol{z}\\sim\\mathcal{D}\\_{\\mathcal{Z}}}[\nF\\_{op}(f,a,b,\\gamma,\\eta\\_\\beta(f),\\boldsymbol{z})] &= \\\\\\\\\n2\\Delta E + \\min\\_{a \\in [0,1]}E\\_a + \\min\\_{b\\in[0,1]}\\max\\_{\\gamma \\in [b-1,1]} F\\_0\n\\end{aligned}\n$$\nwhere \n$$\nF_0:= E_b + 2\\gamma\\Delta E - (\\Delta E)^2\n$$\n\nObviously, since $a$ is decoupled with $b,\\gamma$, we have:\n$$\n\\min_{a \\in [0,1]} E_a = E_{a^*}\n$$\n\n\nNow, we solve the mini-max problem of $F_0$. For any fixed feasible $b$, the inner max problem is a truncated quadratic programming with a unique and closed-form solution. Hence, we first solve the inner maximization problem for fixed $b$, and then represent the mini-max problem as a minimization problem for $b$.\n\nSpecifically, we have:\n$$\n(\\max_{\\gamma\\in[b-1,1]} 2\\gamma \\Delta E-\\gamma^2) = \n\\begin{cases}\n(\\Delta E)^2, & \\Delta E \\ge b-1 \\\\\\\\\n2(b-1)\\Delta E - (b-1)^2, & \\text{otherwise}\n\\end{cases} \n$$\n", " **(A2 part 2)**\n\nThus, we have:\n$$\n\\min_{b\\in[0,1]} \\max_{\\gamma\\in[b-1,1]} F_0=\n\\min_{b \\in [0,1]} F_1\n$$\nwhere\n$$\nF_1 = \\begin{cases}\n&F_{1,0}(b) := E_b + (\\Delta E)^2, & b-1 \\le \\Delta E \\\\\\\\\n& F_{1,1}(b) := E_{-,2}- 2bE_- + 2b-1 + 2(b-1)\\Delta E, & \\text{otherwise}\n\\end{cases}\n$$\nIt is easy to see that both cases of $F_1$ are convex functions w.r.t $b$. So, we can find the global minimum by comparing the minimum of $F_{1,0}$ and $F_{1,1}$. \n\n>CASE 1: $\\Delta E \\ge b-1$. \n\nIt is easy to see that $b_0 = E_- \\in (-\\infty, 1+\\Delta E]$, by taking the derivative to zero, we have, the optimum value is obtained at $b= E_-$ for $F_{1,0}$.\n\n>CASE 2: $\\Delta E \\le b-1$. 
\n\nAgain by taking the derivative, we have:\n\n$$\nF_{1,1}(b)' = -2E_- + 2 + 2 \\Delta E = 2-2E_+ \\ge 0 \n$$\nWe must have:\n\n$$\\inf\\_{b > 1 + \\Delta E} F\\_{1,1}(b) \\ge F\\_{1,1}(1+\\Delta E) = F\\_{1,0}(1+\\Delta E) > F\\_{1,0}(E\\_-) =F\\_{1,0}(b\\^*)$$\n\n> Putting all together\n\nHence the global minimum of $F_1$ is obtained at $b^*$ with:\n$$\nF_1(b^*) = F_{1,0}(b^*) = E_{b^*}+ (\\Delta E)^2\n$$\n\nHence, we have $(OP2)$ has the minimum value:\n$$\n E_{a^*} + E_{b^*} + (\\Delta E)^2 + 2 \\Delta E\n$$\n\n### PROOF COMPLETED\n\nNow, we use a similar trick to prove the result for TPAUC:\n\n> **Lemma B (The Reformulation for TPAUC)** For a **fixed** scoring function $f$, the following two problems shares the same optimum, given that the scoring function satisfies: $f(\\boldsymbol{x}) \\in [0,1],~ \\forall \\boldsymbol{x}$:\n$$\n\\begin{aligned}\n&\\boldsymbol{(OP3)} \\min\\_{f,(a,b)\\in[0,1]\\^2}\\max\\_{\\gamma \\in [-1,1]}\n\\mathbb{E}\\_{\\boldsymbol{x}\\sim\\mathcal{D}\\_\\mathcal{P}}[(f(\\boldsymbol{x})-a)\\^2|\nf(\\boldsymbol{x})\\leq\\eta\\_\\alpha(f)]+\n\\mathbb{E}\\_{\\boldsymbol{x}'\\sim\\mathcal{D}\\_\\mathcal{N}}[(f(\\boldsymbol{x}')-b)\\^2|\nf(\\boldsymbol{x}')\\geq\\eta\\_\\beta(f)]\\\\\\\\\n+2\\Delta \\tilde{E} + 2\\gamma \\Delta \\tilde{E}-\\gamma\\^2\n\\\\\\\\\n&\\boldsymbol{(OP4)} \\min\\_{f,(a,b)\\in[0,1]\\^2}\\max\\_{\\gamma \\in [\\max\\\\{-a,b-1\\\\},1]}\n\\mathbb{E}\\_{\\boldsymbol{x}\\sim\\mathcal{D}\\_\\mathcal{P}}[(f(\\boldsymbol{x})-a)\\^2|\nf(\\boldsymbol{x})\\leq\\eta\\_\\alpha(f)]+\n\\mathbb{E}\\_{\\boldsymbol{x}'\\sim\\mathcal{D}\\_\\mathcal{N}}[(f(\\boldsymbol{x}')-b)\\^2|\nf(\\boldsymbol{x}')\\geq\\eta\\_\\beta(f)]\\\\\\\\\n +2\\Delta \\tilde{E} + 2\\gamma \\Delta \\tilde{E}-\\gamma\\^2\n\\end{aligned}\n$$\n\n> **Remark B** $\\boldsymbol{(OP3)}$ and $\\boldsymbol{(OP4)}$ have the equivalent formulation:\n$$\n\\begin{aligned}\n\\boldsymbol{(OP3)} \\Leftrightarrow\n\\min\\_{(a,b)\\in[0,1]\\^2}\\max\\_{\\gamma \\in [-1, 1]} &\\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}}\\Big[\\left[(f(\\boldsymbol{x})-a)\\^2-\n2(1+\\gamma)f(\\boldsymbol{x})\\right]y/(\\alpha p)\\\\\\\\\n& +\n\\left[(f(\\boldsymbol{x})-b)\\^2+2(1+\\gamma) f(\\boldsymbol{x})\\right]\\(1-y\\)/[\\beta(1-p)] -\\gamma\\^2\\Big].\n\\end{aligned}\n$$\n$$\n\\begin{aligned}\n\\boldsymbol{(OP4)} \\Leftrightarrow\n\\min\\_{(a,b)\\in[0,1]\\^2}\\max\\_{\\gamma \\in [\\max\\\\{-a,b-1\\\\} 1]} &\\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}}\\Big[\\left[(f(\\boldsymbol{x})-a)\\^2- \n2(1+\\gamma)f(\\boldsymbol{x})\\right]y/(\\alpha p)\\\\\\\\\n& +\n\\left[(f(\\boldsymbol{x})-b)\\^2+2(1+\\gamma) f(\\boldsymbol{x})\\right]\\(1-y\\)/[\\beta(1-p)] -\\gamma\\^2\\Big].\n\\end{aligned}\n$$\n\n##### **PROOF**\nAgain, $(OP3)$ has the minimum value:\n\n$$\\tilde{E}\\_{\\tilde{a}\\^*} + E _{b ^\\*} + \\(\\Delta \\tilde{E}\\)\\^2 + 2 \\\\Delta \\tilde{E}$$\n\nWe prove that $(OP4)$ ends up with the minimum value.\n \nBy expanding $(OP4)$, we have:\n$$\n\\begin{aligned}\n(OP4)= 2\\Delta \\tilde{E} + \\min\\_{(a,b)\\in[0,1]\\^2}\\max\\_{\\gamma \\in [\\max\\\\{-a,b-1\\\\},1]} F\\_3\n\\end{aligned}\n$$\nwhere \n$F\\_3:= \\tilde{E}\\_a+ E\\_{b} +2\\Delta \\tilde{E} + 2\\gamma \\Delta \\tilde{E}-\\gamma\\^2$\n\n For any fixed feasible $a,b$, the inner max problem is a truncated quadratic programming, which has a unique and closed-form solution. 
\n\nSpecifically, define $c = \\max\\\\{-a,b-1\\\\}$, we have:\n$$\n(\\max_{\\gamma \\in [c,1]} 2\\gamma \\Delta \\tilde{E}-\\gamma^2) = \n\\begin{cases}\n(\\Delta \\tilde{E})^2, & \\Delta \\tilde{E} \\ge c \\\\\\\\\n2c\\Delta \\tilde{E} - c^2, & \\text{otherwise}\n\\end{cases} \n$$\nThus, we have:\n$$\n\\min_{(a,b)\\in[0,1]^2} \\max_{\\gamma\\in[c,1]} F_3=\n\\min_{(a,b) \\in [0,1]} F_4\n$$\nwhere\n$$\nF\\_4 = \\begin{cases}\n&F\\_{4,0}(a,b) := \\tilde{E}\\_a + E\\_b + (\\Delta \\tilde{E})\\^2, & c \\le \\Delta \\tilde{E} \\\\\\\\\n& F\\_{4,1}(a,b) := \\tilde{E}\\_a + E\\_{-,2} -2b E\\_- + 2(b-1) \\Delta \\tilde{E} + 2b -1 , & b-1 \\ge \\Delta \\tilde{E},~ -a \\le b-1 \\\\\\\\\n& F\\_{4,2}(a,b) := E\\_b + E\\_{+,2} -2a \\tilde{E}\\_+ -a \\Delta \\tilde{E}, & -a \\ge \\Delta \\tilde{E},~ b-1 \\le -a\n\\end{cases}\n$$\n\nIt is easy to see that both cases of $F_4$ are convex functions w.r.t $b$. So, we can find the global minimum by comparing the minimum of $F\\_{4,0}$,$F\\_{4,1}$ and $F\\_{4,2}$. \n\n", " **(A2 part 3)**\n\n>CASE 1: $\\Delta \\tilde{E} \\ge \\max\\{-a,b-1\\}$. \n\nIt is easy to check that when $a = \\tilde{E}\\_+, b = E\\_- $, we have $-a \\le \\Delta \\tilde{E}$ and $b-1 \\le \\Delta \\tilde{E}$. It is easy to see that $a,b$ are decoupled in the expression of $F\\_{4,0}(a,b)$. By setting:\n$$\n\\begin{aligned}\n\\frac{\\partial F_{4,0}(a,b)}{\\partial a} = 0, \\\\ \n\\frac{\\partial F_{4,0}(a,b)}{\\partial b} = 0\n\\end{aligned}\n$$\nWe know that the minimum solution is attained at $a= \\tilde{a}^*$, $b= b^*$. Then the minimum value of $F\\_{4,0}(a,b)$ at this range becomes:\n\n$$\n\\tilde{E}\\_{\\tilde{a}^*} + E\\_{b^*} + (\\Delta \\tilde{E})^2\n$$\n\nMoreover, we will also use the fact that **$\\tilde{E}\\_{\\tilde{a}^\\*}$ and $E\\_{b^\\*}$ are also the global minimum for $E\\_a$ and $E\\_b$, respectively**. \n\n>CASE 2: $b-1 \\ge \\Delta \\tilde{E},~ -a \\le b-1$.\n\nIt is easy to see that $E\\_a \\ge E\\_{\\tilde{a}^*}$ in this case. According to the same derivation as in Lemma A case 2, we have:\n$$\nE_{-,2} -2b E_- + 2(b-1) \\Delta \\tilde{E} + 2b -1 \\ge E_{b^*} + (\\Delta \\tilde{E})^2\n$$\nholds when $b -1 \\le \\Delta \\tilde{E}$. \n\nRecall that CASE 2 is include in the condition $b -1 \\le \\Delta \\tilde{E}$. So, under the condition of CASE 2:\n$$\nF\\_{4,1}(a,b) \\ge \\tilde{E}\\_{\\tilde{a}^*} + E\\_{b^*} + (\\Delta \\tilde{E})^2\n$$\n\n>CASE 3: $-a \\ge \\Delta \\tilde{E},~ b-1 \\le -a$.\n\nIn this case, we have $E_b \\ge E_{b^*}$. It remains to check:\n$$\ng(a) = -2a \\tilde{E}_+ -a \\Delta \\tilde{E}\n$$\nBy taking derivative, we have:\n\n$$\ng'(a) = -2\\tilde{E}\\_+ - \\Delta \\tilde{E} = -\\tilde{E}\\_+ -E\\_+ \\le 0. \n$$\n\nSimilar as the proof of CASE 2, when $-a \\ge \\Delta \\tilde{E}$, we have:\n$$\ng(a) \\ge \\tilde{E}\\_{\\tilde{a}^*} + (\\Delta \\tilde{E})^2\n$$\nand thus\n$$\nF\\_{4,2}(a,b) \\ge \\tilde{E}\\_{\\tilde{a}^*} + E\\_{b^*} + (\\Delta \\tilde{E})^2\n$$\nholds.\n\nSince the condition of CASE 3 is included in the set $-a \\ge \\Delta \\tilde{E}$:\n$$\nF\\_{4,2}(a,b) \\ge \\tilde{E}\\_{\\tilde{a}^*} + E\\_{b^*} + (\\Delta \\tilde{E})^2\n$$\nholds under the condition of CASE 3.\n\n> Putting altogether:\n\nThe minimum value of $(OP4)$ reads:\n$$\n\\tilde{E}\\_{\\tilde{a}^*} + E\\_{b^*} + (\\Delta \\tilde{E})^2 + 2\\Delta \\tilde{E}\n$$\nwhich is the same as $(OP3)$.\n\n##### **PROOF COMPLETED**\n\nFinally, since for each fixed $f$ $(OP3) = (OP4)$, and $(OP1) = (OP2)$ . 
We can then claim the following theorem:\n\n> Theorem A (Constrainted Reformulation) \n$\\min_f (OP1) = \\min_f (OP2), ~~ \\min_f (OP3) = \\min_f (OP4)$\n\n> **Remark C** Since the calculation is irelevant to the definition of the expectation, the replace the population-level expectation with the empirical expectation over the training data.\n\n> **Remark D** By applying Theorem 1, we can get the reformulation result in Theorem 2: Note that we have also corrected the problem raised by Reviewer jtnh.\n> - for $\\mathrm{OPAUC}$\n $$\\min\\_{f,(a,b)\\in[0,1]\\^2} \\max\\_{\\gamma\\in [b-1,1]}\\min\\_{s'\\in\\Omega\\_{s'}}\\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}} [G\\_{op}(f,a,b,\\gamma,\\boldsymbol{z},s')]$$\n where\n $$\\begin{aligned}\n G\\_{op}(f,a,b,\\gamma,\\boldsymbol{z},s')&=[(f(\\boldsymbol{x})-a)\\^2-\n 2(1+\\gamma)f(\\boldsymbol{x})]y/p\\\\\\\\\n & +\n \\left(\\beta s' +\\left[(f(\\boldsymbol{x})-b)\\^2+2(1+\\gamma) f(\\boldsymbol{x})-s'\\right]\\_+\\right)(1-y)/[\\beta (1-p)] -\\gamma\\^2.\n \\end{aligned}\n $$\n> - for $\\mathrm{TPAUC}$ \n $$\\min\\_{f,(a,b)\\in[0,1]\\^2 } \\max\\_{\\gamma\\in [\\max\\\\{-a,b-1\\\\}]}\\min\\_{s\\in\\Omega\\_{s},s'\\in\\Omega\\_{s'}}\\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}} [G\\_{tp}(f,a,b,\\gamma,\\boldsymbol{z},s,s')]$$\n where\n $$\\begin{aligned}\n G\\_{tp}(f,a,b,\\gamma,\\boldsymbol{z},s,s')&=\\left(\\alpha s + r\\_{\\kappa}\\left((f(\\boldsymbol{x})-a)\\^2-\n 2(1+\\gamma)f(\\boldsymbol{x})-s\\right)\\right)y/(\\alpha p) -\\gamma\\^2\\\\\\\\\n & +\n \\left(\\beta s' +r\\_{\\kappa}\\left((f(\\boldsymbol{x})-b)\\^2+2(1+\\gamma) f(\\boldsymbol{x})-s'\\right)\\right)(1-y)/[\\beta (1-p)].\n \\end{aligned}\n $$", " > **(Q3)** The problem of Theorem 3\n\n##### REFERENCE\n\n> [A] Tsaknakis, Ioannis, Mingyi Hong, and Shuzhong Zhang. \"Minimax problems with coupled linear constraints: computational complexity, duality and solution methods.\" arXiv:2110.11210 (2021).\n\n**(A3)** Thank you so much for posing such a key issue! We have realized the problem. However, according to the recent work of **Ref.[A]**. We also reformulated it as an off-the-shelf mini-max problem where the coupled constraint is replaced with the Lagrange multipliers ($\\theta\\_b$ for OPAUC, $\\theta\\_b, \\theta\\_a$ for TPAUC). 
Specifically, by Thm.2 in Ref.[A], we have, for OPAUC:\n\n$$\n\\begin{aligned}\n{\\min\\_{f,(a,b)\\in[0,1]\\^2,s\\in\\Omega\\_{s}}\\max\\_{\\gamma\\in[b-1,1]} \\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}}[G\\_{op}\\^{\\kappa,\\omega}]}=\\min\\_{f,(a,b)\\in[0,1]\\^2,s\\in\\Omega\\_{s},\\theta\\_b \\ge 0}\\max\\_{\\gamma\\in[-1,1]} \\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}}[G\\_{op}\\^{\\kappa,\\omega}] - \\theta\\_{b}(b-1-\\gamma),\n\\end{aligned}\n$$\n\nand for $\\mathrm{TPAUC}$:\n$$\n{\\min\\_{f,(a,b)\\in[0,1]\\^2,s\\in\\Omega\\_{s},s'\\in\\Omega\\_{s'}}~~\\max\\_{\\gamma\\in[\\max\\\\{-a,b-1\\\\},1]} \\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}}[G\\_{tp}\\^{\\kappa,\\omega}]}= \\min\\_{f,(a,b)\\in[0,1]\\^2,s\\in\\Omega\\_{s},s'\\in\\Omega\\_{s'},\\theta\\_a \\ge 0,\\theta\\_b \\ge 0}\\max\\_{\\gamma\\in[-1,1]} \\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}}[G\\_{tp}\\^{\\kappa,\\omega}] \\\\\\\\- \\theta\\_{b}(b-1-\\gamma)- \\theta\\_{a}(-a-\\gamma)\n$$\n\nMoreover, since the original loss function is bounded from below/above, it is easy to show that the optimal solution for the multipliers $\\theta^*_b, \\theta^*_a$ satisfies $\\theta^*_b \\le \\infty, \\theta^*_a \\le \\infty$. Hence, there must exist a sufficiently large $M>0$, such that:\n\n$$\n{\\min\\_{f,(a,b)\\in[0,1]\\^2,s\\in\\Omega\\_{s}}\\max\\_{\\gamma\\in[b-1,1]} \\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}}[G\\_{op}\\^{\\kappa,\\omega}]}=\\min\\_{f,(a,b)\\in[0,1]\\^2,s\\in\\Omega\\_{s},\\theta\\_b \\in [0,M\\_1]}\\max\\_{\\gamma\\in[-1,1]} \\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}}[G\\_{op}\\^{\\kappa,\\omega}] - \\theta\\_{b}(b-1-\\gamma),\n$$\nand for $\\mathrm{TPAUC}$:\n$$\n{\\min\\_{f,(a,b)\\in[0,1]\\^2,s\\in\\Omega\\_{s}}~~\\max\\_{\\gamma\\in[\\max\\\\{-a,b-1\\\\},1]} \\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}}[G\\_{tp}\\^{\\kappa,\\omega}]}= \\min\\_{f,(a,b)\\in[0,1]\\^2,s\\in\\Omega\\_{s},s'\\in\\Omega\\_{s'},\\theta\\_a \\in [0,M\\_2],\\theta\\_b \\in [0,M\\_3]}~~~\\max\\_{\\gamma\\in[-1,1]} \\mathbb{E}\\_{\\boldsymbol{z}\\sim \\mathcal{D}\\_\\mathcal{Z}}[G\\_{tp}\\^{\\kappa,\\omega}] \\\\\\\\- \\theta\\_{b}(b-1-\\gamma)- \\theta\\_{a}(-a-\\gamma)\n$$\n\nHere, to make sure that $M_1,M_2,M_3$ are large enough, we set them as $M_1 = M_2 =M_3 =10^9$. \n\n\nBased on the correction of $(OP2)$ and $(OP4)$, we conduct the experiments again. 
Here is the result (The highest and the second best results are bolded and italic, respectively), which shows that our algorithm still outperforms the competitors for the following OPAUC and TPAUC metrics.\n\nOPAUC($\\mathrm{FPR}\\leq0.3$)\n|methods|cifar-10-1|cifar-10-2|cifar-10-3|cifar-100-1|cifar-100-2|cifar-100-3|tniy-im200-1|tiny-im200-2|tiny-200-3|\n|--------|----------|----------|----------|-----------|-----------|-----------|------------|------------|----------|\n|SOPA|*0.7659*|*0.9688*|**0.7651**|*0.9108*|*0.9875*|0.8483|0.8157|0.9037|0.9066|\n|SOPA-S|0.7548|0.9674|*0.7542*|0.9033|0.9860|0.8449|0.8180|*0.9087*|0.9095|\n|AGD-SBCD|0.7526|0.9615|0.7497|0.9105|0.9814|0.8406|0.8135|0.9081|0.9057|\n|AUC-poly|0.7542|0.9672|0.7538|0.9027|0.9859|0.8441|0.8185|0.9084|*0.9100*|\n|AUC-exp|0.7347|0.9620|0.7457|0.8987|0.9850|0.8407|0.8127|0.9026|0.9049|\n|CE|0.7417|0.9431|0.7428|0.8903|0.9695|0.8321|0.8023|0.8917|0.8878|\n|MB|0.7492|0.9648|0.7500|0.9003|0.9804|**0.8575**|*0.8193*|0.9072|0.9091|\n|AUC-M|0.7334|0.9609|0.7442|0.8996|0.9845|0.8403|0.8102|0.9011|0.9043|\n|PAUCI|**0.7721**|**0.9716**|0.7399|**0.9155**|**0.9889**|*0.8492*|**0.8267**|**0.9214**|**0.9217**|\n\nTPAUC($\\mathrm{FPR}\\leq0.5,\\mathrm{TPR}\\geq0.5$)\n|methods|cifar-10-1|cifar-10-2|cifar-10-3|cifar-100-1|cifar-100-2|cifar-100-3|tniy-im200-1|tiny-im200-2|tiny-200-3|\n|--------|----------|----------|----------|-----------|-----------|-----------|------------|------------|----------|\n|SOPA|*0.7096*|*0.9593*|*0.7220*|*0.8714*|*0.9855*|0.7485|*0.7417*|*0.8681*|*0.8650*|\n|SOPA-S|0.6603|0.9456|0.6917|0.8617|0.9812|0.7419|0.7354|0.8666|0.8628|\n|AUC-poly|0.6804|0.9543|0.6974|0.8618|0.9835|0.7431|0.7349|0.8676|0.8627|\n|AUC-exp|0.6804|0.9493|0.6930|0.8613|0.9827|0.7447|0.7328|0.8672|0.8626|\n|CE|0.6420|0.9353|0.6798|0.8467|0.9603|0.7311|0.7223|0.8517|0.8598|\n|MB|0.6437|0.9492|0.6913|0.8665|0.9677|**0.7583**|0.7348|0.8651|0.8624|\n|AUC-M|0.6520|0.9381|0.6821|0.8505|0.9822|0.7324|0.7361|0.8517|0.8598|\n|PAUCI|**0.7192**|**0.9663**|**0.7305**|**0.8814**|**0.9874**|*0.7497*|**0.7618**|**0.8875**|**0.8860**|", " > **(Q4)** There are some problems with Table 1 and the comparison in Table 1 is unfair.\n\n **(A4)**: We have carefully revised table 1. The new version is attached as follows:\n\n>**Table 1** Comparison with existing partial AUC algorithms. The convergence rate represents the number of iterations after which an algorithm can find an $\\epsilon$-stationary point, where $\\epsilon$-sp is $\\epsilon$-stationary point and. $\\triangle$ implies a natural result of non-convex SGD. $n_+^B$($n_-^B$ resp.) is the number of positive (negative resp.) instances for each mini-batch $B$. 
$\\kappa \\rightarrow \\infty, \\omega \\rightarrow 0$ implies that our method is unbiased under such a asymptotic condition.\n\n||SOPA|SOPA-S|TPAUC|Ours|\n|-------------------------------|--------------------|--------------------|--------------------------------|--------------------------------------------------------------------------------|\n|ConvergenceRate(OPAUC)|$O(\\epsilon^{-4})$|$O(\\epsilon^{-4})$|$O(\\epsilon^{-4})^{\\triangle}$|$O(\\epsilon^{-3})$|\n|ConvergenceRate(TPAUC)|$O(\\epsilon^{-6})$|$O(\\epsilon^{-4})$|$O(\\epsilon^{-4})^{\\triangle}$|$O(\\epsilon^{-3})$|\n|ConvergenceMeasure|$\\epsilon$-sp(non-smooth)|$\\epsilon$-sp|$\\epsilon$-sp|$\\epsilon$-sp|\n|Smoothness|non-smooth|smooth|smooth|smooth|\n|OPAUC&TPAUC|$\\surd$|$\\surd$|$\\surd$|$\\surd$|\n|Unbiased|$\\surd$|X|X|$\\begin{aligned}\\kappa\\rightarrow\\infty\\\\\\\\ \\omega\\rightarrow0\\end{aligned}$|\n|Per-IteratoinTimeComplexity|$O(n_+^Bn_-^B)$|$O(n_+^Bn_-^B)$|$O(n_+^Bn_-^B)$|$O(n_+^B+n_-^B)$|\n\n\n\n> **(Q4.1)** $n_+$ representing the total number of positive instances?\n\n **(A4.1)** We are sorry about the unclear notation. Now we use $n^B_+, and ~n^B_-$ to represent the number of positive/negative instances in each mini-batch.\n\n> **(Q4.2)** The names of algorithms in Table 1 are not consistent with reference paper. \n\n **(A4.2)** We are sorry about the inconsistent algorithm names. We have modified their names to avoid inconsistency.\n\n> **(Q4.3)** The comparison with [43] in Table 1 is unfair. \n\n **(A4.3)** Thank you for your correction. To make it fair, we have included more information in table 1.\n\n> **(Q4.4)** The comparison with [39] in Table 1 is unfair. \n\n **(A4.4)** Thank you for your correction. We won't comparison with [39] in the table. Now table 1 only contains algorithms which can optimize both OPAUC and TPAUC. Moreover, we include the empirical comparisons on OPAUC metrics within a fixed range (please see the answer for **(Q5)** below).\n\n> **(Q4.5)** The statement about the convergence for [38] is wrong. \n\n **(A4.5)** Thank you for your correction. The right convergence rate should be $O(\\epsilon^{-4})$. We will fix it in the next version.\n\n\n\n> **(Q5)** Concerning the numerical result comparison with [39].\n\n\n **(A5)** Thank you for your suggestion. We conduct the following experiments and restrict algorithms within a fixed FPR range. All experiments are conducted on an Ubuntu 16.04.1 server with an Intel(R) Xeon(R) Silver 4110 CPU and four RTX 3090 GPUS. For AGD-SBCD, all parameters are tuned by following the [39]'s experiments. Other experiment setups are the same as our paper. \n\n> **Performance Experiment**\nThe results show that our algorithm can reach comparable performance with AGD-SBCD in this case. \n\nOPAUC ($0.1 \\leq \\mathrm{FPR}\\leq 0.3$)\n| methods | cifar-10-1 | cifar-10-2 | cifar-10-3 | cifar-100-1 | cifar-100-2 | cifar-100-3 | tniy-im200-1 | tiny-im200-2 | tiny-200-3 |\n| -------- | ---------- | ---------- | ---------- | ----------- | ----------- | ----------- | ------------ | ------------ | ---------- |\n| AGD-SBCD | 0.8482 | **0.9932** | 0.8510 | **0.9565** | **0.9885** | 0.8859 | 0.9147 | 0.9611 | **0.9666** |\n| PAUCI | **0.8560** | 0.9907 | **0.8615** | 0.9517 | 0.9863 | **0.8918** | **0.9231** | **0.9673** | 0.9542 |\n\n\n\n>### **Convergence Experiment**\n\nWe show the convergence comparison here in the following table on the training data. 
It could be seen that our algorithm start to enter a stable state after epoch 30, while AGD-SBCD starts to enter a stable state after epoch 40. \n\n\nOPAUC ($0.1 \\leq \\mathrm{FPR}\\leq 0.3$, cifar-10-1)\n| methods | epochs:5 | epochs:10 | epochs:15 | epochs:20 | epochs:25 | epochs:30 | epochs:35 | epochs:40 | epochs:45 | epochs:50 |\n| -------- | ---------- | ---------- | ---------- | ----------- | ----------- | ----------- | ------------ | ------------ | ---------- | ---------- |\n| AGD-SBCD | 0.8493 | 0.8777 | 0.8894 | 0.9011 | 0.8944 | 0.8990 | 0.9033 | 0.9073 | 0.9060 | 0.9080 |\n| PAUCI | 0.8624 | 0.8853 | 0.8933 | 0.9003 | 0.9089 | 0.9115 | 0.9126 | 0.9104 | 0.9139 | 0.9127 |\n\n", " Thank you so much for your careful and valuable feedbacks! The responses are as follows:\n\n> **(Q1)**: Taking a look at Appendix E, I see that there is $p=\\mathbb{P}(y=1)$, but this doesn't seem defined when $\\mathcal{D}_{\\mathcal{Z}}$ is; furthermore, this parameter $p$ exists in the Thm.1 statement in the appendix, but not in the main text. Am I supposed to implicitly take it to equal 1/2?\n\n **(A1)**: We are sorry for the notation typo. $p$ should exist in Thm.1 in the main text. \n\n> **(Q2)**: Having looked at Appendix E, I think there may be some factors missing in $F_{op}$?\n\n **(A2)**: Thank you for your correction. There are indeed some factors missing in $F_{op}$. Actually, $1/\\beta$ should appear for all the terms involving the conditional expectation. We will fix this issue in the next version. The correct definition for $\\mathrm{OPAUC}$ should be:\n\n$\n\\min\\_{(a,b)\\in[0,1]\\^2}\\max\\_{\\gamma\\in[-1,1]} \\underset{\\boldsymbol{z}\\sim\\mathcal{D}\\_{\\mathcal{Z}}}{\\mathbb{E}} [F\\_{op}(f,a,b,\\gamma,\\eta\\_{\\beta}(f),\\boldsymbol{z})],\n$\n\nwhere\n\n$\n\\begin{aligned}\nF\\_{op}(f,a,b,\\gamma,\\eta\\_{\\beta}(f),\\boldsymbol{z})&=(f(\\boldsymbol{x})-a)^2y/p+\n(f(\\boldsymbol{x})-b)^2(1-y)\\mathbb{I}\\_{f(\\boldsymbol{x})\\geq\\eta\\_{\\beta}(f)}/[\\beta(1-p)]\\\\\\\\\n&\\quad+2(1+\\gamma)f(\\boldsymbol{x})(1-y)\\mathbb{I}\\_{f(\\boldsymbol{x})\\geq\\eta\\_{\\beta}(f)}/[\\beta(1-p)] - \n2(1+\\gamma)f(\\boldsymbol{x})y/p -\\gamma^2.\n\\end{aligned}\n$\n\nThis is the right expression which equals to\n\n$\n\\begin{aligned}\n&\\min\\_{(a,b)\\in[0,1]^2}\\max\\_{\\gamma\\in[-1,1]} \\underset{\\boldsymbol{x}\\sim\\mathcal{D}\\_{\\mathcal{P}}}{\\mathbb{E}} [(f(\\boldsymbol{x})-a)^2-2(\\gamma+1)f(\\boldsymbol{x})] -\\gamma^2 \\\\\\\\\n&+ \\underset{\\boldsymbol{x}'\\sim\\mathcal{D}\\_{\\mathcal{N}}}{\\mathbb{E}} [(f(\\boldsymbol{x}')-b)^2+2(\\gamma+1)f(\\boldsymbol{x}')|f(\\boldsymbol{x}')\\geq \\eta\\_{\\beta}(f)].\n\\end{aligned}\n$\n\nWe also correct the expressions in Thm.1-3, including TPAUC.\n\nTo correct the experiments, we conduct the experiments with consideration of factors missing $\\beta$. 
Here is the result (The highest and the second-best results are bolded and italic, respectively), which shows that our algorithm still outperforms the competitors for the following OPAUC and TPAUC metrics.\n\n> **OPAUC ($\\mathrm{FPR}\\leq 0.3$)**\n\n|methods|cifar-10-1|cifar-10-2|cifar-10-3|cifar-100-1|cifar-100-2|cifar-100-3|tiny-im200-1|tiny-im200-2|tiny-200-3|\n|--------|----------|----------|----------|-----------|-----------|-----------|------------|------------|----------|\n|SOPA|*0.7659*|*0.9688*|**0.7651**|*0.9108*|*0.9875*|0.8483|0.8157|0.9037|0.9066|\n|SOPA-S|0.7548|0.9674|*0.7542*|0.9033|0.9860|0.8449|0.8180|*0.9087*|0.9095|\n|AGD-SBCD|0.7526|0.9615|0.7497|0.9105|0.9814|0.8406|0.8135|0.9081|0.9057|\n|AUC-poly|0.7542|0.9672|0.7538|0.9027|0.9859|0.8441|0.8185|0.9084|*0.9100*|\n|AUC-exp|0.7347|0.9620|0.7457|0.8987|0.9850|0.8407|0.8127|0.9026|0.9049|\n|CE|0.7417|0.9431|0.7428|0.8903|0.9695|0.8321|0.8023|0.8917|0.8878|\n|MB|0.7492|0.9648|0.7500|0.9003|0.9804|**0.8575**|*0.8193*|0.9072|0.9091|\n|AUC-M|0.7334|0.9609|0.7442|0.8996|0.9845|0.8403|0.8102|0.9011|0.9043|\n|PAUCI|**0.7721**|**0.9716**|0.7399|**0.9155**|**0.9889**|*0.8492*|**0.8267**|**0.9214**|**0.9217**|\n\n>**TPAUC ($\\mathrm{FPR}\\leq 0.5, \\mathrm{TPR}\\geq 0.5$)**\n\n|methods|cifar-10-1|cifar-10-2|cifar-10-3|cifar-100-1|cifar-100-2|cifar-100-3|tniy-im200-1|tiny-im200-2|tiny-200-3|\n|--------|----------|----------|----------|-----------|-----------|-----------|------------|------------|----------|\n|SOPA|*0.7096*|*0.9593*|*0.7220*|*0.8714*|*0.9855*|0.7485|*0.7417*|*0.8681*|*0.8650*|\n|SOPA-S|0.6603|0.9456|0.6917|0.8617|0.9812|0.7419|0.7354|0.8666|0.8628|\n|AUC-poly|0.6804|0.9543|0.6974|0.8618|0.9835|0.7431|0.7349|0.8676|0.8627|\n|AUC-exp|0.6804|0.949|0.6930|0.8613|0.9827|0.7447|0.7328|0.8672|0.8626|\n|CE|0.6420|0.9353|0.6798|0.8467|0.9603|0.7311|0.7223|0.8517|0.8598|\n|MB|0.6437|0.9492|0.6913|0.8665|0.9677|**0.7583**|0.7348|0.8651|0.8624|\n|AUC-M|0.6520|0.9381|0.6821|0.8505|0.9822|0.7324|0.7361|0.8517|0.8598|\n|PAUCI|**0.7192**|**0.9663**|**0.7305**|**0.8814**|**0.9874**|*0.7497*|**0.7618**|**0.8875**|**0.8860**|\n\n> **(Q3)**: Based on this response, is it correct to say that with regularization, it isn't necessarily known whether or not you still have asymptotic unbiasedness?\n\n **(A3)**: Thank you for your question. The answer is Yes. The regularization might lead to extra bias. However, a regularization scheme is a popular trick for machine learning methods. As a very general result, it will inevitably induce bias. However, it is known to be a necessary building block to stabilize the solutions and improve generalization performance. ", " Thank you for all the responses! I have a couple remaining questions/concerns:\n\n**Regarding A1**\nTaking a look at Appendix E, I see that there is $p=\\mathbb{P}(y=1)$, but this doesn't seem defined when $\\mathcal{D}\\_\\mathcal{Z}$ is; furthermore, this parameter $p$ exists in the Thm 1 statement in the appendix, but not in the main text. Am I supposed to implicitly take it to equal 1/2?\n\n**Regarding A3**\nHaving looked at Appendix E, I think there may be some factors missing in $F\\_{op}$?\n\nFrom Eq. 
57,\n$$F\\_{op} = (f(x)-a)^2y/p + (f(x)-b)^2(1-y)/(1-p)I\\\\{f(x) \\geq \\eta\\_\\beta(f)\\\\} + 2(1+\\gamma)f(x)(1-y)/(1-p)I\\\\{f(x) \\geq \\eta\\_\\beta(f)\\\\} - 2(1+\\gamma)f(x)y/p - \\gamma^2$$\n\nTaking expectation wrt $z$, we get\n$$\\mathbb{E}\\_{x \\sim D\\_P}[(f(x)-a)^2] + \\beta \\mathbb{E}\\_{x \\sim D\\_N}[(f(x)-b)^2] + 2\\beta(1+\\gamma)\\mathbb{E}\\_{x \\sim D\\_N}[f(x)] + 2(1+\\gamma)\\mathbb{E}\\_{x \\sim D\\_P}[f(x)] - \\gamma^2$$\n\nHowever, based on the preceding discussion, we should also have factors of $\\beta$ on the first, fourth, and fifth terms. Am I missing something somewhere? Also, it's not clear what setting of $p$ recovers the statement of Theorem 1 in the main text; the most likely candidate would be $p=1/2$ (and then dividing the objective by half), but that would yield an extra factor of $2$ on the $\\gamma^2$ term.\n\n**Regarding A7**\nBased on this response, is it correct to say that with regularization, it isn't necessarily known whether or not you still have asymptotic unbiasedness?", " To Authors, all Reviewers and the AC. \n\nThank you for your replies. After reading your feedback and the paper again, I still have some concerns and questions.\n\n**1. Theorem 3 is incorrect and the convergence cannot be guaranteed.**\n\nIn **Theorem 1** and **Theorem 2**, the constraint set $\\Omega_\\gamma$ is $[b-1, 1]$ which depends on $b$, a decision variable of the outer minimization problem. This is a very big issue. First, for a given $b$, the optimal $\\gamma$ for the inner maximization depends on $b$. As a result, $b^*$ in the outer optimal solution is not guaranteed to be (53). Second, Algorithm 1 cannot be applied to a min-max problem where the constraint of inner maximization depends on the decision variables of the outer maximization problem. The convergence result in **Theorem 3** (Theorem 9 [15]) also fail because of that. I don't think the authors can easily fix this issue because a min-max problem with coupling constraints is a fundamentally more difficult in optimization, and [15] is not the right method to apply. Moreover, it seems restricting $\\gamma$ in $[b-1, 1]$ is the key to ensure **Proposition 1** (monotonicity of $\\ell_1(x')$), which is needed to develop **Theorem 2**. Given that restricting $\\gamma$ in $[b-1, 1]$ is not allowed, **Theorem 2** is also not meaningful neither. The same problems also happen to two-way partial AUC maximization (**Proposition 2** and **Theorem 6**). Hence, the entire paper is wrong. I want to draw all reviewers' attention to this issue. \n\n**2. There are some problems with Table 1 and the comparison in Table 1 is unfair.**\n\n(1) In the line of \"Per-Iteration Time Complexity\", all of the algorithms listed in *Table 1* are stochastic algorithms. Would you please explain the reason that stochastic algorithms have $O(n\\_+n\\_-)$ per-iteration complexity with $n\\_+$ representing the total number of positive instances?\n\n(2) The names of algorithms in Table 1 are not consistent with reference paper. It is confused to use different names. In [43], the algorithm for one way partial AUC is called **SOPA** or **SOPA-S** and the algorithm for two way partial AUC is called **SOTA-S**. In [39], the algorithm is called **AGD-SBCD**. \n\n(3) The comparison with [43] in Table 1 is unfair. For [43], the convergence rate is developed for finding a stationary point of the original problem with $[\\cdot]\\_+$, which is **non-smooth**. 
However, in this paper, the convergence rate is for finding a stationary point of a **smoothed** problem. The smoothed problem in general may **not** have the same stationary point or optimal solution as the original non-smooth problem, so $\\epsilon$ has different meanings in [43] and this paper. Hence, the convergence rates of these two algorithms are not comparable.\n\n(4) The comparison with [39] in Table 1 is unfair. [39] is focused on solving partial AUC when FPR is in a fixed range. The fixed range partial AUC cannot be simply classified as OPAUC. However, it is unclear the proposed algorithm can be applied to optimizing pAUC with FPR in a range $[\\alpha, \\beta]$. Hence, it is unfair to compare with [39] in Table 1.\n\n(5) The statement about the convergence for [38] is wrong. Given that [38] transfers the problem into standard minimization problems, hence, for general non-convex smooth optimization under bounded variance condition SGD has a complexity of $O(1/\\epsilon^4)$ and under Lipschitz stochastic gradient condition variance reduced methods (e.g., STORM used in the paper) has a complexity of $O(1/\\epsilon^3)$. It is not clear why the authors claim the complexity of $O(1/\\epsilon^2)$ for [38]. \n\n**4. Concerning the numerical result comparison with [39].**\n\nThank you for implementing numerical comparison with [39]. However, [39] is designed for partial AUC **in a fixed range**, not simply OPAUC. Hence, it would be more reasonable to implement partial AUC in a fixed range. Moreover, the table shows that the algorithm in this paper can achieve better testing performance, which does not mean that this algorithm can converge faster. Convergence curves are preferable.\n\n**5. Formulations (8), (14), (22) are wrong. All of them miss components $p$ and $1-p$.**\n\nGiven the above reasons, I will lower my score to 1 unless the authors can address the issues including the fatal error. ", " Thank you for your valuable comments. The replies are attached below.\n\n> **(Q1)** In the Notation section, is $\\mathcal{D} _{\\mathcal{Z}}$ understood to be a mixture of $\\mathcal{D} _{\\mathcal{P}}\\times 1$ and $\\mathcal{D} _{\\mathcal{N}}\\times 0$? If so, at what mixture proportion?\n\n**(A1):** We are sorry for the unclear notation. The definition of $\\mathcal{D} _{\\mathcal{Z}}$ should be:\n\n$$\n\\begin{array}{lll}\n\\underset{z}{\\mathbb{P}}(\\cdot)= & \\mathbb{P} _{x \\mid y=1}(\\cdot) \\cdot \\mathbb{P}(y=1) +&\\mathbb{P} _{x \\mid y=0}(\\cdot) \\cdot \\mathbb{P}(y=0). \\\\\\\\\n\\downarrow&\\downarrow&\\downarrow\\\\\\\\\n\\mathcal{D} _{\\mathcal{Z}} & \\mathcal{D} _{\\mathcal{P}} & \\mathcal{D} _{\\mathcal{N}}\n\\end{array}\n$$\n\nIn other words, it is the joint distribution of the feature $x$ and label $y$. \n\n> **(Q2)** In line 100, it is said that \"Note that (3) and (5) are hard to optimize since it is complicated to determine the positive quantile $\\eta _{\\alpha}(f)$ and the negative quantile $\\eta _{\\beta}(f)$.\" This may be a silly question, but can't you just sort to find the largest $n _+^\\alpha$ and $n _{-}^\\beta$ scores? This would be like $O(nlogn)$ time, which wouldn't be much worse than the $O(n)$ time.\n\n**(A2)**: Thank you so much for your nice question! We agree with you that ranking has a nearly linear complexity. However, the sorting operation is not differentiable since the operation is not continuous. Hence, its convergence guarantee is hard to obtain. 
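(An aside before the next point: the differentiable route that replaces this sorting step, i.e., the average-top-k identity behind the auxiliary variable $s'$ discussed in (A3) below, can be checked numerically. The brute-force grid search over $s$ is only for illustration, and the further smoothing of the hinge $[\cdot]_+$ used in the paper is omitted here.)

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.uniform(0.0, 2.0, size=200)        # per-instance losses
k = 30                                          # the hardest (top-k) fraction

avg_top_k = np.sort(losses)[::-1][:k].mean()

# average of the k largest losses == min_s { s + (1/k) * sum_i [losses_i - s]_+ }
def objective(s):
    return s + np.maximum(losses - s, 0.0).sum() / k

s_grid = np.linspace(losses.min(), losses.max(), 20001)
print(avg_top_k, min(objective(s) for s in s_grid))   # the two agree up to grid resolution
```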
Moreover, even if we want to calculate an estimated gradient, we have to use the full-batch data. These two factors together make it almost impossible to optimize the original objective function based on off-the-shelf deep learning tools.\n\n> **(Q3)** Can some intuition be provided behind $F_{op}$; i.e., what roles do all the introduced variables play? Does the equivalence of (9) hold for each $f$? That is, is it true that $\\mathcal{R} _\\beta(f)=\\min _{a,b}\\max _\\gamma\\mathbb{E} _z[F _{op}(...)]$?\n\n**(A3 Part 1)**: The functionality of the reformulation is two-fold:\n\n1. To reformulate the pairwise objective function into an **instance-wise** form, so that the optimization problem could be solved with the off-the-shelf tools.\n\n2. To reformulate the partial ranking with a differentiable optimization problem, so that we can calculate the overall gradient. \n\nIn this sense, we employ four auxiliary variables in the reformulation, *i.e.*, $a,b,\\gamma, s'$. The first three variables are used to convert **pairwise** calculations into **instance-wise** optimization. While $s^{\\prime}$ is used to reformulate the partial ranking operation as a differentiable optimization problem. The detailed proofs are shown in **Appendix E.1**. Here, we only present a quick review of the key steps where the auxiliary variables are introduced into the formulation. ", " **(A3 Part 2)**: \n\nFor the original objective function, we have:\n\n$$\n\\begin{aligned}\n&\\underset{\\boldsymbol{x}, \\boldsymbol{x} ^{\\prime} \\sim \\mathcal{D} _{\\mathcal{P}}, \\mathcal{D} _{\\mathcal{N}}}{\\mathbb{E}}\\left[\\left(1-\\left(f(\\boldsymbol{x})-f\\left(\\boldsymbol{x} ^{\\prime}\\right)\\right)\\right) ^{2} \\mid f\\left(\\boldsymbol{x} ^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\right]\\\\\\\\\n=1 &+\n\\underbrace{\n \\mathbb{E} _{\\boldsymbol{x} \\sim \\mathcal{D} _{\\mathcal{P}}}\n \\left[f(\\boldsymbol{x}) ^{2}\\right]\n -\n \\big(\\mathbb{E} _{\\boldsymbol{x} \\sim \\mathcal{D} _{\\mathcal{P}}}\n [f(\\boldsymbol{x})]\\big) ^{2}\n} _{(1)}\\\\\\\\\n&+\n\\underbrace{\n \\mathbb{E} _{\\boldsymbol{x} ^{\\prime} \\sim \\mathcal{D} _{\\mathcal{N}}}\n \\left[f\\left(\\boldsymbol{x} ^{\\prime}\\right)^{2} \n \\mid f\\left(\\boldsymbol{x} ^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\right] \n -\n \\bigg(\n \\mathbb{E} _{\\boldsymbol{x}^{\\prime} \\sim \\mathcal{D} _{\\mathcal{N}}}\n \\Big[f\\left(\\boldsymbol{x} ^{\\prime}\\right)\n \\mid f\\left(\\boldsymbol{x} ^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\Big]\n \\bigg)^{2}\\\\\n} _{(2)}\\\\\\\\\n& +\n\\underbrace{\n\\bigg(\n\\mathbb{E} _{\\boldsymbol{x} \\sim \\mathcal{D} _{\\mathcal{P}}}\n\\Big[f(\\boldsymbol{x})\\Big]\n-\n\\mathbb{E} _{\\boldsymbol{x}^{\\prime} \\sim \\mathcal{D} _{\\mathcal{N}}}\n\\Big[f\\left(\\boldsymbol{x} ^{\\prime}\\right) \n\\mid f\\left(\\boldsymbol{x} ^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\Big]\n\\bigg)^{2} \n-2 \\mathbb{E} _{\\boldsymbol{x} \\sim \\mathcal{D} _{\\mathcal{P}}}\n\\Big[f(\\boldsymbol{x})\\Big]\n+2 \\mathbb{E} _{\\boldsymbol{x} ^{\\prime} \\sim \\mathcal{D} _{\\mathcal{N}}}\n\\Big[f\\left(\\boldsymbol{x} ^{\\prime}\\right) \n\\mid f\\left(\\boldsymbol{x} ^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\Big].\n} _{(3)}\n\\end{aligned}\n$$\n\nIt is easy to see that the pairwise form comes from $(1),(2),(3)$.\n\nFor $(1)$, according to the definition of the variance, we have:\n\n$$\n{\\underset{\\boldsymbol{x} \\sim \n\\mathcal{D} 
_{\\mathcal{P}}}{\\mathbb{E}}\n\\left[f(\\boldsymbol{x})^{2}\\right]-\n\\underset{\\boldsymbol{x} \n\\sim \\mathcal{D} _{\\mathcal{P}}}{\\mathbb{E}}\n[f(\\boldsymbol{x})]^{2}}=\\min _{a \\in[0,1]} \n\\underset{\\boldsymbol{x} \\sim \n\\mathcal{D} _{\\mathcal{P}}}{\\mathbb{E}}\n\\left[(f(\\boldsymbol{x})-a)^{2}\\right],\n$$\n\nwith the optimal $a$ being:\n\n$$\na^{*}=\\underset{\\boldsymbol{x} \n\\sim \\mathcal{D}_{\\mathcal{P}}}{\\mathbb{E}}[f(\\boldsymbol{x})].\n$$\n\nSimilarly, for $(2)$, we have:\n\n$$\n\\begin{array}{c}\n{\\underset{\\boldsymbol{x}^{\\prime} \n\\sim \n\\mathcal{D} _{\\mathcal{N}}}{\\mathbb{E}}\n\\left[f\\left(\\boldsymbol{x}^{\\prime}\\right)^{2} \n\\mid \nf\\left(\\boldsymbol{x}^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\right]-\n\\underset{\\boldsymbol{x}^{\\prime} \\sim \\mathcal{D} _{\\mathcal{N}}}\n{\\mathbb{E}}\\Big[f\\left(\\boldsymbol{x}^{\\prime}\\right) \\mid \nf\\left(\\boldsymbol{x}^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\Big]^{2}= }\\\\\n\\min _{b \\in[0,1]} \\underset{\\boldsymbol{x}^{\\prime} \\sim \\mathcal{D} _{\\mathcal{N}}}{\\mathbb{E}}\\left[\\left(f\\left(\\boldsymbol{x}^{\\prime}\\right)-b\\right)^{2} \\mid f\\left(\\boldsymbol{x}^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\right],\n\\end{array}\n$$\n\nwith the optimal $b$ being:\n\n$$\nb^{*}=\\underset{\\boldsymbol{x}^{\\prime} \\sim \\mathcal{D} _{\\mathcal{N}}}{\\mathbb{E}}\\left[f\\left(\\boldsymbol{x}^{\\prime}\\right) \\mid f\\left(\\boldsymbol{x}^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\right] .\n$$\n\nFor (3), we have:\n\n$$\n\\begin{array}{l}\n{\\left(\\underset{\\boldsymbol{x}^{\\prime} \\sim \\mathcal{D}\\_{\\mathcal{N}}}{\\mathbb{E}}\\left[f\\left(\\boldsymbol{x}^{\\prime}\\right) \\mid f\\left(\\boldsymbol{x}^{\\prime}\\right) \\geq \\eta\\_{\\beta}(f)\\right]-\\underset{\\boldsymbol{x} \\sim \\mathcal{D}\\_{\\mathcal{P}}}{\\mathbb{E}}[f(\\boldsymbol{x})]\\right)^{2} }=\\\\\\\\\n\\max _{\\gamma}\\left\\\\{2 \\gamma\\left(\\underset{\\boldsymbol{x}^{\\prime} \\sim \\mathcal{D} _{\\mathcal{N}}}{\\mathbb{E}}\\left[f\\left(\\boldsymbol{x}^{\\prime}\\right) \\mid f\\left(\\boldsymbol{x}^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\right]-\\underset{\\boldsymbol{x} \\sim \\mathcal{D} _{\\mathcal{P}}}{\\mathbb{E}}[f(\\boldsymbol{x})]\\right)-\\gamma^{2}\\right\\\\},\n\\end{array}\n$$\n\nwith the optimal solution being:\n\n$$\n\\gamma^{*}=\\underset{\\boldsymbol{x}^{\\prime} \\sim \\mathcal{D} _{\\mathcal{N}}}{\\mathbb{E}}\\left[f\\left(\\boldsymbol{x}^{\\prime}\\right) \\mid f\\left(\\boldsymbol{x}^{\\prime}\\right) \\geq \\eta _{\\beta}(f)\\right]-\\underset{\\boldsymbol{x} \\sim \\mathcal{D} _{\\mathcal{P}}}{\\mathbb{E}}[f(\\boldsymbol{x})].\n$$\n\nSo far, we can plug in the intermediate results to get an instance-wise mini-max reformulation. For more details, please refer to **Appendix E.1.1** to see the proof of Thm.1.\n\nBased on the reformulation, we can use the monotonicity constraint and the differentiable reformulation of the top-k-sum problem to introduce the variable $s^{\\prime}$ to deal with the constraint $f(\\boldsymbol{x}') \\ge \\eta_\\beta(f)$. Please see the proof of Prop.1 and Thm.2 in **Appendix E.1.2** for more details.\n\nIn the reformulation, $f$ is always fixed. So, we will have \n\n$$\n\\mathcal{R} _\\beta(f)=\\min _{a,b}\\max_\\gamma\\mathbb{E} _z[F _{op}(...)].\n$$", " > **(Q4)** In Line 115: Does $\\hat{\\mathbb{E}} _{\\boldsymbol{x}'\\sim\\mathcal{D} _{\\mathcal{N}}}$ mean empirical expectation over negative data points? 
If so, this seems inconsistent with the definition of $\\eta _{\\beta}(f)$ from before.\n\n**(A4)**: We are sorry for this typo. It should be $\\mathbb{E} _{\\boldsymbol{x}' \\sim \\mathcal{D} _{\\mathcal{N}}}$ in line 115.\n\n> **(Q5)** Again in Line 115: Isn't $\\mathbb{I}[f(x'\\geq\\eta _{\\beta})]$ always 0 or 1? How can it equal $\\beta$?\n\n**(A5)**: We are sorry for this typo. It should be\n\n$$\n\\eta _{\\beta}(f)=\\arg \\min _{\\eta _{\\beta} \\in \\mathbb{R}} \\Big[ \\mathbb{E} _{\\boldsymbol{x}^{\\prime} \\sim \\mathcal{D} \n _{\\mathcal{N}}}\\Big[\\mathbb{I}\\left[f\\left(\\boldsymbol{x}^{\\prime}\\right) \\geq \\eta _{\\beta}\\right]\\Big]=\\beta\\Big].\n$$\n\n> **(Q6)** In the equation block in the line 116: should it be $z_i$ instead of $z$ in the sum?\n\n**(A6)**: Yes, it should be replaced with $z_i$ here. Thank you so much for your suggestions!\n\n> **(Q7)** The claim of asymptotic unbiasedness seems to come from the point-wise convergence of the objective $G_{op}^{\\kappa}$ as $\\kappa \\to \\infty$. However, I'm a bit worried that point-wise convergence of the objective may not necessarily convergence of minimizer. It is especially worrying that $\\omega$ grows with $\\kappa$ in the training algorithm, so the regularization effect may dominate or cause some asymptotic bias. Can the claim of asymptotic unbiasedness be more fleshed out?\n\n**(A7 Part 1)**: Thank you so much for such an inspiring question! We will answer it with two sub-problems. We will attach the following arguments to the appendix in the next version.\n\n### (1)Uniform Convergence without regularization.\n\nThank you so much for posing this question. Note that the parameters $a,b,\\gamma,s'$ are chosen from tight feasible sets and that the scoring function outputs are assumed to be located in $[0,1]$. In this sense, we can also prove that a stronger convergence result holds without the regularization term:\n\n$$\n\\begin{aligned}\n&\\lim _{\\kappa \\rightarrow \\infty}\n\\bigg|\n\\min _{f,(a, b)\\in[0,1] ^2} \n\\max _{\\gamma\\in\\Omega _{\\gamma}} \n\\min _{s ^{\\prime}\\in\\Omega _{s'}} \n\\hat{\\mathbb{E}} _{\\boldsymbol{z} _{i} \\sim S}\\Big[\nG _{o p}^{\\kappa}\\left(f, a, b, \\gamma, \\boldsymbol{z} _{i}, s ^{\\prime}\\right)\n\\Big]\\\\\\\\\n&~~~~~~~~~~~~-\n\\min _{ f,(a, b)\\in[0,1] ^2} \n\\max _{\\gamma\\in\\Omega _{\\gamma}} \n\\min _{s ^{\\prime}\\in\\Omega _{s'}} \n\\hat{\\mathbb{E}} _{\\boldsymbol{z} _{i} \\sim S}\\Big[\nG _{o p}(f,a, b, \\gamma, \\boldsymbol{z} _{i}, s ^{\\prime})\\Big]\n\\bigg| \\\\\\\\\n&=0. 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(*)\n\\end{aligned}\n$$\n\nWe prove Eq.$(*)$ in the following arguments:\n\n### PROOF:\n\nDenote:\n\n$$\n\\begin{aligned}\n\\Delta=&\\bigg|\n\\min _{f,(a, b)\\in[0,1] ^2} \n\\max _{\\gamma\\in\\Omega _{\\gamma}} \n\\min _{s ^{\\prime}\\in\\Omega _{s'}} \n\\hat{\\mathbb{E}} _{\\boldsymbol{z} _{i} \\sim S}\\Big[\nG _{o p} ^{\\kappa}\\left(f, a, b, \\gamma, \\boldsymbol{z} _{i}, s ^{\\prime}\\right)\n\\Big]\\\\\\\\\n&-\n\\min _{ f,(a, b)\\in[0,1] ^2} \n\\max _{\\gamma\\in\\Omega _{\\gamma}} \n\\min _{s ^{\\prime}\\in\\Omega _{s'}} \n\\hat{\\mathbb{E}} _{\\boldsymbol{z} _{i} \\sim S}\\Big[\nG _{o p}(f,a, b, \\gamma, \\boldsymbol{z} _{i}, s^{\\prime})\\Big]\n\\bigg|.\n\\end{aligned}\n$$\n\nFirst, we have:\n\n$$\n\\begin{aligned}\n& \\limsup _{\\kappa \\rightarrow +\\infty}\\Delta \\leq & \\underbrace{\n\\limsup _{\\kappa \\rightarrow +\\infty} \n\\sup _{f, (a, b)\\in[0,1]^2, \\gamma\\in \\Omega _{\\gamma}, s^{\\prime}\\in \\Omega _{s'}, \n\\boldsymbol{z}\\sim\\mathcal{D} _{\\mathcal{Z}}}\n\\left|\n\\frac{\\log(1+\\exp(\\kappa\\cdot g))}{\\kappa} - \n[g] _+\n\\right|} _{(a)}.\n\\end{aligned}\n$$\n\nwhere $g=(f(\\boldsymbol{x})-b)^2+2(1+\\gamma)f(\\boldsymbol{x})-s'$ and $[x]_+=\\max\\\\{x,0\\\\}$. Since $g \\in[-5,5]$ in the feasible set, we have:\n\n$$\n(a)\\le \\limsup _{\\kappa \\rightarrow +\\infty} \n\\sup _{x\\in[-5,5]}\n\\left|\n\\frac{\\log(1+\\exp(\\kappa\\cdot x))}{\\kappa} - \n[x] _+\n\\right|.\n$$\n\nNext, we prove that \n\n$$\n\\underset{\\kappa\\to\\infty}{\\limsup} \n\\underset{x \\in [-5,5]}{\\sup}\n\\left[\\left|\n\\frac{\\log(1+\\exp(\\kappa\\cdot x))}{\\kappa} - \n[x]_+\n\\right|\\right] \\le 0.\n$$\n\nFor the sake of simplicity, we denote:\n\n$$\n\\ell(x) = \\left|\n\\frac{\\log(1+\\exp(\\kappa\\cdot x))}{\\kappa} - \n[x]_+\n\\right|.\n$$\n\nWhen $x<0$, we have:\n\n$$\n\\ell(x)^\\prime = \\left(\\frac{\\log(1+\\exp(\\kappa \\cdot x))}{\\kappa}\\right)^\\prime \\ge 0.\n$$\n\nWhen $x>0$, we have:\n\n$$\n\\ell(x)'=\\left(\\frac{\\log(1+\\exp(\\kappa \\cdot x))}{\\kappa}-x\\right)^\\prime \\le 0.\n$$\n\nHence, the supermum must be attained at $x=0$. We have:\n\n$$\n(a)\\le \\limsup_{\\kappa\\rightarrow +\\infty} \\frac{\\log(2)}{\\kappa}= 0 .\n$$\n\nThe absolute value ensures that:\n\n$$\n\\liminf_{\\kappa\\rightarrow +\\infty}\\Delta\\ge 0.\n$$\n\nThe result follows from the fact:\n\n$$\n0 \\le \\liminf_{\\kappa \\rightarrow +\\infty} \\Delta \\le \\limsup_{\\kappa \\rightarrow +\\infty} \\Delta \\le 0.\n$$\n\n### PROOF COMPLETED\n\nMoreover, from the proof above, we also obtain a convergence rate:\n\n$$\n\\Delta = O(1/\\kappa).\n$$", " **(A7 Part 2)**: \n \n**(2) Convergence with the Regularization Term**\n\nThe regularization term is introduced to ensure that the objective is concave $\\textit{w.r.t.}$ $\\gamma$ when all the other variables are fixed. This is true when $\\kappa\\leq 2+ 2\\omega$ (Recall that $\\kappa$ is the hyperparameter of $r_\\kappa$, and $\\omega$ is the regularization weight in the loss function). In practice, we do not need a large $\\kappa$ to achieve unbiasedness. So the required $\\omega$ also has a limited magnitude. Next, we show this claim empirically. 
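\n\nAs a minimal numerical sketch of how quickly the surrogate approaches the hinge (a rough illustration assuming a plain NumPy implementation; the grid density and the helper name below are our own choices, not settings from the experiments):\n\n```python\nimport numpy as np\n\ndef smoothed_hinge(x, kappa):\n    # log(1 + exp(kappa * x)) / kappa, the smooth surrogate of the hinge [x]_+ used above\n    return np.logaddexp(0.0, kappa * x) / kappa\n\nxs = np.linspace(-5.0, 5.0, 200001)  # dense grid covering the feasible range of g\nfor kappa in [1, 2, 4, 8, 16]:\n    gap = np.max(np.abs(smoothed_hinge(xs, kappa) - np.maximum(xs, 0.0)))\n    print(kappa, gap, np.log(2.0) / kappa)  # empirical sup gap vs. the log(2)/kappa value from the proof\n```\n\nThe printed gap reproduces the $\\log(2)/\\kappa$ value from the proof above up to grid resolution, so a moderate $\\kappa$ already gives a small approximation error.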
\n\n Since $g \\in [-5,5]$, we can check the uniform approximation ability of a given $\\kappa$ with the following error:\n\n$$\nerr(\\kappa) = \n\\int_{-5}^{5}\\left(\n\\frac{\\log (1+\\exp (\\kappa \\cdot g))}{\\kappa}-[g]_+\n\\right)^2dg.\n$$\n\nRecall that $\\kappa\\leq 2+ 2\\omega, 0 \\le \\omega$, given a choice of $\\kappa$, we can choose $\\omega$ as: \n\n$$\n\\omega=[\\kappa/2-1]_+.\n$$\n\nThe following table shows how $err(\\kappa), \\omega$ evolves when $\\kappa$ grows from 1 to 12. \n\n| $\\kappa$ | 1 | 2 | 3 | 4 | 5 | 6 |\n|:-------------|:-------- |:-------- | -------- |:-------- |:-------- |:-------- |\n| $\\omega$ | 0 | 0 | 0.5 | 1 | 1.5 | 2 |\n| $err(\\kappa)$ | 0.600983 | 0.075128 | 0.022260 | 0.009391 | 0.004808 | 0.002782 |\n\n| $\\kappa$ | 7 | 8 | 9 | 10 | 11 | 12 |\n|:-------------|:-------- |:-------- |:-------- |:-------- |:-------- |:-------- |\n| $\\omega$ | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 |\n| $err(\\kappa)$ | 0.001752 | 0.001173 | 0.000824 | 0.000601 | 0.000451 | 0.000347 |\n\nIt is easy to see that having a $\\omega$ around 6 is sufficient to get a good approximation. Hence, the regularization term will not dominate the loss in most cases. \n\n> **(Q8)** As $G$ is a proxy function, are there conditions on the original problem that would make $G$ automatically satisfy Assumption 1?\n\n**(A8)**: Thank you so much for your question! First, this is a widely-used assumption for non-convex optimization literature such as Ref. [1-9].\n\nMoreover, some recent studies also verify the feasibility of the gradient Lipschitz assumption of deep learning models. For example, Ref.[9] states that the gradient Lipschitz assumption is satisfied if the following conditions hold:\n\n1. The neural network uses the smooth activation function ($\\textit{i.e.}$, Linear function, Logistic function, Softplus function, Tanh function).\n\n2. The output score is bounded with Sigmoid function $\\sigma(x)=1/(1+\\exp(-f(\\boldsymbol{x}))$ (the input will be scale into closed set [0,1]).\n\n3. The parameter of neural network $\\boldsymbol{w}$ is bounded ($\\textit{e.g.}$, $\\|\\boldsymbol{w}\\|^2\\leq M$).\n\nRef. [10] estimates the gradient Lipschitz constants of the deep neural network. They propose a general estimation for the upper bound on the Lipschitz constant of the gradient of any loss function for the parameters. The results show that such constants are finite under mild conditions. \n\n###### REFERENCE\n\n1. Guo, Zhishuai, et al. \"Randomized stochastic variance-reduced methods for multi-task stochastic bilevel optimization.\" *arXiv preprint arXiv:2105.02266* (2021). **[Assumption 1, 2]**\n\n2. Hu, Quanqi, Yongjian Zhong, and Tianbao Yang. \"Multi-block Min-max Bilevel Optimization with Applications in Multi-task Deep AUC Maximization.\" *arXiv preprint arXiv:2206.00260* (2022). **[Assumption 2.2]**\n\n3. Jiang, Wei, et al. \"Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization.\" *arXiv preprint arXiv:2207.08540* (2022). **[Assumption 1]**\n\n4. Qi, Qi, et al. \"Attentional biased stochastic gradient for imbalanced classification.\" *arXiv preprint arXiv:2012.06951* (2020). **[Assumption 1]**\n\n5. Wang, Bokun, and Tianbao Yang. \"Finite-Sum Coupled Compositional Stochastic Optimization: Theory and Applications.\" *International Conference on Machine Learning*. PMLR, 2022. **[Assumption 1]**\n\n6. Luo, Luo, et al. 
\"Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems.\" *Advances in Neural Information Processing Systems* 33 (2020): 20566-20577. **[Assumption 2]**\n\n7. Chen, Tianyi, Yuejiao Sun, and Wotao Yin. \"A single-timescale stochastic bilevel optimization method.\" *arXiv preprint arXiv:2102.04671* (2021). **[Assumption 1]**\n\n8. Gao, Hongchang, Bin Gu, and My T. Thai. \"Stochastic Bilevel Distributed Optimization over a Network.\" *arXiv preprint arXiv:2206.15025* (2022).**[Assumption 4]**\n\n9. Dixian Zhu, Gang Li, Bokun Wang, Xiaodong Wu, Tianbao Yang *Proceedings of the 39th International Conference on Machine Learning*, PMLR 162:27548-27573, 2022. **[Assumption 2]**\n\n10. Fazlyab, Mahyar, et al. \"Efficient and accurate estimation of lipschitz constants for deep neural networks.\" Advances in Neural Information Processing Systems 32 (2019).\n", " Thanks for your valuable suggestions! The replies to your concerns are attached below.\n\n> **(Q1)** The contribution of this paper \n\n**(A1)**: Thank you so much for your question! The key contribution is two-fold:\n\n1. We propose a new mini-max reformulation of the OPAUC (TPAUC) optimization problem. With this reformulation, we can find an efficient solution to an approximated problem with time complexity $O(\\epsilon^{-3})$ to find an $\\epsilon$-stationary point. Moreover, the approximated problem is shown to have asymptotic unbiasedness under certain conditions.\n\n2. Going beyond efficiency, the reformulation also provides a new path to derive generalization bounds on top of the mini-max problem. With this technique, we derive the upper bounds of the OPAUC (TPAUC) generalization error for **real-valued function classe**s (The following Existing studies can only deal with **0-1 classifier**). \n \n - Narasimhan, Harikrishna, and Shivani Agarwal. \"Support vector algorithms for optimizing the partial area under the ROC curve.\" Neural computation 29.7 (2017): 1919-1963.\n \n - Yang, Zhiyong, et al. \"When all we need is a piece of the pie: A generic framework for optimizing two-way partial auc.\" International Conference on Machine Learning. PMLR, 2021.\n \n Moreover, we can finish the analysis with much simpler steps since the interdependency of pairwise loss terms is eliminated in the instance-wise form. \n\n> **(Q2)** The difference from [39]\n\n**(A2)**: Thank you for your valuable comments. The difference is four-fold:\n\n1. We use different techniques from [39] to reformulate the original problem. [39] uses the DC technique to reformulate the objective as a difference of **pairwise** convex functions. In our paper, we reformulate the original problem as a mini-max problem of an **instance-wise** objective function. \n\n2. From the efficiency perspective, since we are dealing with instance-wise functions, our algorithm has an $O(n_++n_-)$ per-iteration complexity. Moreover, for the convergence rate, our algorithm enjoys an $O(\\epsilon^{-3})$ time complexity to achieve an $\\epsilon$-stationary point. By contrast, the DC algorithm in [39] has an $O(n_+n_-)$ per-iteration complexity, and an $O(\\epsilon^{-6})$ convergence rate. Hence, our algorithm is more efficient than DC. We agree that the comparison is slightly unfair due to **3** below. Hence, we will use performance comparison to compare them directly. \n\n3. Our algorithm is solving a smooth approximated problem while [39] is solving a non-smooth version of the original problem. 
As a result, both algorithms have approximation errors. For our algorithm, the error comes from the bias of the approximation. While for [39], the error comes from the suboptimal definition of the nearly $\\epsilon$-critical point. Moreover, we can give an error analysis for the approximation (**please see the response for R1 Q2 for details.**), while such a guarantee is a bit unclear for [39].\n\n4. Besides optimization, we can also use the reformulated problem to improve generalization analysis. **Please see the reply to your question Q1 for more details.**\n\n> **(Q3)** This paper solves the approximated problem, while [39] solves the original PAUC problem. The comparison of their convergence rates is not fair.\n\n**(A3)**: We agree with the reviewer that the convergence rate comparison is a bit unfair since the two methods are solving different problems. However, as shown in reply to your question Q2, since [39] is solving a non-smooth problem, it also induces approximation error. Specifically, it comes from using the nearly $\\epsilon$-stationary point instead of the exact $\\epsilon$-stationary point. So we will compare them directly according to their performance. See the reply to the question below.\n\n> **(Q4)** It would be more convincing to see some numerical result comparison with [39]\n\n**(A4):** \n Since [39] is designed for OPAUC, we conduct the comparison on all datasets in terms of OPAUC. The results show that our algorithm can achieve better performance. \nOPAUC ($\\mathrm{FPR}\\leq0.3$)\n\n| | cifar-10-1 | cifar-10-2 | cifar-10-3 | cifar-100-1 | cifar-100-2 | cifar-100-3 | tiny-im-1 | tiny-im-2 | tiny-im-3 |\n|:-----:|:----------:|:----------:|:----------:|:-----------:|:-----------:|:-----------:| :---------:|:---------:|:---------:|\n| PAUCI | 0.7713 | 0.9718 | 0.7736 | 0.9199 | 0.9885 | 0.8538 | 0.8223 | 0.9124 | 0.9141 |\n| DCA | 0.7526 | 0.9615 | 0.7497 | 0.9105 | 0.9814 | 0.8406 |0.8135 | 0.9081 | 0.9057 |\n", " > **(Q5)** Assumption 1 requires Lipschitz continuous gradients for every data, which is a very strong assumption, and $L_G$ can be infinity in this assumption. Is it possible to use a weaker assumption (e.g., the gradient expectation is Lipschitz continuous) and get the same convergence rate?\n\n**(A5)**: Thank you for your valuable comments! For our case, the assumption is equivalent to assuming that the scoring function has Lipschitz continuous gradients for every data. Such assumptions are widely used in recent studies about non-convex optimizations, see Ref.[1-9].\n\nMoreover, some recent studies have started discussing the condition under which the assumption holds. For example, Ref.[9] states that the gradient Lipschitz assumption for scoring function assumption is satisfied if the following conditions hold:\n\n1. The neural network uses the smooth activation function ($\\textit{i.e.}$, Linear function, Logistic function, Softplus function, Tanh function).\n\n2. The output score is bounded with Sigmoid function $\\sigma(x)=1/(1+\\exp(-f(\\boldsymbol{x}))$ (the input will be scaled into closed set $[0,1]$).\n\n3. The parameter of neural network $\\boldsymbol{w}$ is bounded ($\\textit{e.g.}$, $\\|\\boldsymbol{w}\\|^2\\leq M$).\n\nRef. [10] estimates the gradient Lipschitz constants of the output of the deep neural network. They propose a general estimation for the upper bound on the Lipschitz constant of the gradient of any loss function to the parameters. The results show that such constants are finite under mild conditions.\n\n##### REFERENCE\n\n1. 
Guo, Zhishuai, et al. \"Randomized stochastic variance-reduced methods for multi-task stochastic bilevel optimization.\" *arXiv preprint arXiv:2105.02266* (2021). **[Assumption 1, 2]**\n\n2. Hu, Quanqi, Yongjian Zhong, and Tianbao Yang. \"Multi-block Min-max Bilevel Optimization with Applications in Multi-task Deep AUC Maximization.\" *arXiv preprint arXiv:2206.00260* (2022). **[Assumption 2.2]**\n\n3. Jiang, Wei, et al. \"Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization.\" *arXiv preprint arXiv:2207.08540* (2022). **[Assumption 1]**\n\n4. Qi, Qi, et al. \"Attentional biased stochastic gradient for imbalanced classification.\" *arXiv preprint arXiv:2012.06951* (2020). **[Assumption 1]**\n\n5. Wang, Bokun, and Tianbao Yang. \"Finite-Sum Coupled Compositional Stochastic Optimization: Theory and Applications.\" *International Conference on Machine Learning*. PMLR, 2022. **[Assumption 1]**\n\n6. Luo, Luo, et al. \"Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems.\" *Advances in Neural Information Processing Systems* 33 (2020): 20566-20577. **[Assumption 2]**\n\n7. Chen, Tianyi, Yuejiao Sun, and Wotao Yin. \"A single-timescale stochastic bilevel optimization method.\" *arXiv preprint arXiv:2102.04671* (2021). **[Assumption 1]**\n\n8. Gao, Hongchang, Bin Gu, and My T. Thai. \"Stochastic Bilevel Distributed Optimization over a Network.\" *arXiv preprint arXiv:2206.15025* (2022).**[Assumption 4]**\n\n9. Zhu, Dixian, et al. \"When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee.\" arXiv preprint arXiv:2203.00176 (2022). **[Assumption 2]**\n\n10. Herrera, Calypso, Florian Krach, and Josef Teichmann. \"Estimating full lipschitz constants of deep neural networks.\" arXiv preprint arXiv:2004.13135 (2020).\n\n> **(Q6)** Can this formulation conversion be used to optimize PAUC in a range of FPR/TPR?\n\n**(A6):** We can use the sum-of-ranged loss proposed in the following work to get a similar reformulation. However, due to the space and time limitation, we will leave it as future work. \n\n- Hu, Shu, Yiming Ying, and Siwei Lyu. \"Learning by minimizing the sum of ranked range.\" Advances in Neural Information Processing Systems 33 (2020): 21013-21023.", " Thanks for your valuable suggestions! The replies to your concerns are attached below.\n\n> **(Q1)** There seem to have typos in the proof. For example, in line 529, in the decomposition of the conditional risk, I think $\\ell$ should be replaced with $\\ell_{0-1}$. The same problem exists in line 578.\n\n**(A1):** We are sorry for the typos. We will correct them in the next version. Thank you so much for your cross-check!\n\n> **(Q2)** In Figures 4-5, I can only see the efficiency improvement in the number of iterations. But the authors also claimed that the reformulation could improve the per-iteration efficiency, which I do agree with. I think it may be better if they could give some empirical comparison in terms of this. How will the reformulation improve the per-iteration complexity? Show it with experiments.\n\n**(A2):** Thank you for your suggestion! We conduct some experiments for per-iteration complexity with a fixed epoch with varying $n_+, n_-$. All experiments are conducted on an Ubuntu 16.04.1 server with an Intel(R) Xeon(R) Silver 4110 CPU. For every method, we repeat running 10000 times and record the average running time. 
We only record the loss calculation time and use the python package `time.time()` to calculate the running time. Methods with * stand for the pair-wise estimator, while methods with ** stand for the instance-wise estimator. Here is the result of the experiment. We see the acceleration is significant when the data is large. \n\nOPAUC ($\\mathrm{FPR}\\leq0.3$)\n\n| unit:ms | $\\begin{aligned}n_+=64\\\\\\\\n_-=64\\end{aligned}$ | $\\begin{aligned}n_+=128\\\\\\\\n_-=128\\end{aligned}$ | $\\begin{aligned}n_+=256\\\\\\\\n_-=256\\end{aligned}$ | $\\begin{aligned}n_+=512\\\\\\\\n_-=512\\end{aligned}$ | $\\begin{aligned}n_+=1024\\\\\\\\n_-=1024\\end{aligned}$ | $\\begin{aligned}n_+=2048\\\\\\\\n_-=2048\\end{aligned}$ |\n|:---------:|:----------------:|:------------------:|:------------------:|:------------------:|:--------------------:|:--------------------:|\n| SOPA* | 0.075 | 0.205 | 1.427 | 5.053 | 20.132 | 86.779 |\n| SOPA-S* | 0.063 | 0.165 | 0.946 | 4.003 | 15.815 | 62.031 |\n| AGD-SBCD* | 0.061 | 0.145 | 1.040 | 3.413 | 13.273 | 54.954 |\n| AUC-poly* | 0.062 | 0.178 | 1.086 | 3.553 | 14.266 | 56.637 |\n| AUC-exp* | 0.063 | 0.182 | 0.985 | 3.513 | 14.155 | 55.689 |\n| MB* | 0.121 | 0.174 | 0.468 | 1.713 | 6.393 | 25.663 |\n| PAUCI** | 0.026 | 0.029 | 0.033 | 0.043 | 0.072 | 0.107 |\n| AUC-M** | 0.025 | 0.028 | 0.031 | 0.040 | 0.059 | 0.104 |\n| CE** | 0.018 | 0.020 | 0.026 | 0.036 | 0.055 | 0.096 |\n\nTPAUC ($\\mathrm{FPR}\\leq0.5,\\mathrm{TPR}\\geq0.5$)\n\n| unit:ms | $\\begin{aligned}n_+=64\\\\\\\\n_-=64\\end{aligned}$ | $\\begin{aligned}n_+=128\\\\\\\\n_-=128\\end{aligned}$ | $\\begin{aligned}n_+=256\\\\\\\\n_-=256\\end{aligned}$ | $\\begin{aligned}n_+=512\\\\\\\\n_-=512\\end{aligned}$ | $\\begin{aligned}n_+=1024\\\\\\\\n_-=1024\\end{aligned}$ | $\\begin{aligned}n_+=2048\\\\\\\\n_-=2048\\end{aligned}$ |\n|:---------:|:----------------:|:------------------:|:------------------:|:------------------:|:--------------------:|:--------------------:|\n| SOPA* | 0.079 | 0.206 | 1.439 | 5.197 | 20.556 | 88.314 |\n| SOPA* | 0.065 | 0.153 | 0.947 | 3.940 | 15.388 | 62.541 |\n| AUC-poly* | 0.062 | 0.180 | 1.175 | 3.573 | 14.440 | 56.469 |\n| AUC-exp* | 0.059 | 0.206 | 1.154 | 3.558 | 14.080 | 56.566 |\n| MB* | 0.173 | 0.198 | 0.491 | 1.955 | 6.554 | 29.369 |\n| PAUCI** | 0.030 | 0.030 | 0.038 | 0.045 | 0.071 | 0.109 |\n| AUC-M** | 0.025 | 0.027 | 0.033 | 0.043 | 0.059 | 0.104 |\n| CE** | 0.018 | 0.021 | 0.026 | 0.037 | 0.0535 | 0.096 |\n", " > **(Q3)** In Sec. 5, the generalization analysis is based on the fact that the (empirical) risk is proportional to the minimax (empirical) risk. I’m wondering if the final result is also proportional to the excess risk $R- \\hat{R}$.\n\n**(A3)**: Taking OPAUC as an instance, the (empirical) risk is also proportional to the minimax (empirical) risk with coefficient $\\beta$. 
To see this, we have: \n\n$$\n\\mathcal{R}_{\\beta}(f) = \\beta \\cdot \\min _{(a, b) \\in[0,1]^{2}} \\max _{\\gamma \\in \\Omega _{\\gamma}} \\min _{s^{\\prime} \\in \\Omega _{s^{\\prime}}} \\underset{z \\sim \\mathcal{D} _{\\mathcal{Z}}}{\\mathbb{E}}\\left[G _{o p}\\left(f, a, b, \\gamma, \\boldsymbol{z}, s^{\\prime}\\right)\\right],\n$$\n\nand \n\n$$\n\\hat{\\mathcal{R}} _{\\beta}(f) = \n\\beta\\cdot \n\\min _{(a, b) \\in[0,1]^{2}} \n\\max _{\\gamma \\in \\Omega _{\\gamma} } \n\\min _{s^{\\prime} \\in \\Omega _{s^{\\prime}}}\n\\underset{{z \\sim \\mathcal{D} _{\\mathcal{Z}}}}{\\hat{\\mathbb{E}}}\\left[G _{o p}\\left(f, a, b, \\gamma, \\boldsymbol{z}, s^{\\prime}\\right)\\right].\n$$\n\nThen we get:\n\n$$\n\\begin{aligned}\n\\mathcal{R} _{\\beta}(f)-\\hat{\\mathcal{R}} _{\\beta}(f)=\n\\beta\\left(\\min _{(a, b) \\in[0,1]^{2}} \\max _{\\gamma \\in \\Omega _{\\gamma}} \\min _{s^{\\prime} \\in \\Omega _{s^{\\prime}}} \\underset{z \\sim \\mathcal{D} _{\\mathcal{Z}}}{\\mathbb{E}}\\left[G _{o p}\\left(f, a, b, \\gamma, \\boldsymbol{z}, s^{\\prime}\\right)\\right]\\right.\\\\\\\\\n\\left.-\\min _{(a, b) \\in[0,1]^{2}} \n\\max _{\\gamma \\in \\Omega _{\\gamma} } \n\\min _{s^{\\prime} \\in \\Omega _{s^{\\prime}}}\n\\underset{{z \\sim \\mathcal{D} _{\\mathcal{Z}}}}{\\hat{\\mathbb{E}}}\\left[G _{o p}\\left(f, a, b, \\gamma, \\boldsymbol{z}, s^{\\prime}\\right)\\right])\\right).\n\\end{aligned}\n$$\n\nSo the final result is also proportional to the excess risk with coefficient $\\beta$. The result is similar to TPAUC. \n\n> **(Q4)** The batch size in this paper is set to 1024, which is quite large. I’m wondering how the authors implement this in their experiments.\n\n**(A4)**: We use `torch.nn.DataParallel` to implement on four RTX 3090 GPUs.\n\n> **(Q5)** It is necessary to state some analysis on the explicit upper bounds for the empirical Rademacher complexity, e.g., after Theorem 4.\n\n**(A5)**: Thank you so much for your suggestion! Under a standard setting, the upper bound for the empirical Rademacher complexities $ \\hat{\\mathfrak{R}} _+, \\hat{\\mathfrak{R}} _- $ are of order $O((1/n _+) ^{-1/2}), O((1/n _-) ^{-1/2})$, respectively. This is true for many hypothesis classes, including linear models and deep neural networks (see the references below). So the overall sample complexity should be $O((1/n _+) ^{-1/2}), O((1/n _-) ^{-1/2})$. We will include this discussion after Thm.4. \n\n- Schapire, Robert E., and Yoav Freund. \"Foundations of machine learning.\" (2012): 23-52.\n\n- Golowich N, Rakhlin A, Shamir O. Size-independent sample complexity of neural networks, COLT 2018: 297-299.\n", " Thanks for your valuable suggestions! The replies to your concerns are attached below.\n\n> **(Q1)** The math is dense even in the main paper. Though I can understand most of the details, I think the authors can add more details and intuitive content to guide readers unfamiliar with AUC.\n\n**(A1)**: Thank you for your great suggestions! We will include more details about math and AUC. In addition, more intuitions about the theorems and lemmas will be introduced in Introduction and Preliminaries.\n\n> **(Q2)** I only see the performance comparisons in the main paper. I think efficiency is more important in this paper since the goal is to accelerate. So, I would also like to the running time comparisons in the experiments.\n\n**(A2)**: Thank you for your suggestion! We conduct some experiments for per-iteration complexity with a fixed epoch with varying $n_+, n_-$. 
All experiments are conducted on an Ubuntu 16.04.1 server with an Intel(R) Xeon(R) Silver 4110 CPU (to get rid of the affect of parallel computing). For every method, we repeat running 10,000 times and record the average running time. We only record the loss calculation time and use the python package `time.time()` to calculate the running time. Methods with * stand for the pair-wise estimator, while methods with ** stand for the instance-wise estimator. Here is the result of the experiment. We see the acceleration is significant when the data is large. \n\n\nOPAUC ($\\mathrm{FPR}\\leq0.3$)\n\n\n| unit:ms | $\\begin{aligned}n_+=64\\\\\\\\n_-=64\\end{aligned}$ | $\\begin{aligned}n_+=128\\\\\\\\n_-=128\\end{aligned}$ | $\\begin{aligned}n_+=256\\\\\\\\n_-=256\\end{aligned}$ | $\\begin{aligned}n_+=512\\\\\\\\n_-=512\\end{aligned}$ | $\\begin{aligned}n_+=1024\\\\\\\\n_-=1024\\end{aligned}$ | $\\begin{aligned}n_+=2048\\\\\\\\n_-=2048\\end{aligned}$ |\n|:---------:|:----------------:|:------------------:|:------------------:|:------------------:|:--------------------:|:--------------------:|\n| SOPA* | 0.075 | 0.205 | 1.427 | 5.053 | 20.132 | 86.779 |\n| SOPA-S* | 0.063 | 0.165 | 0.946 | 4.003 | 15.815 | 62.031 |\n| AGD-SBCD* | 0.061 | 0.145 | 1.040 | 3.413 | 13.273 | 54.954 |\n| AUC-poly* | 0.062 | 0.178 | 1.086 | 3.553 | 14.266 | 56.637 |\n| AUC-exp* | 0.063 | 0.182 | 0.985 | 3.513 | 14.155 | 55.689 |\n| MB* | 0.121 | 0.174 | 0.468 | 1.713 | 6.393 | 25.663 |\n| PAUCI** | 0.026 | 0.029 | 0.033 | 0.043 | 0.072 | 0.107 |\n| AUC-M** | 0.025 | 0.028 | 0.031 | 0.040 | 0.059 | 0.104 |\n| CE** | 0.018 | 0.020 | 0.026 | 0.036 | 0.055 | 0.096 |\n\nTPAUC ($\\mathrm{FPR}\\leq0.5,\\mathrm{TPR}\\geq0.5$)\n\n| unit:ms | $\\begin{aligned}n_+=64\\\\\\\\n_-=64\\end{aligned}$ | $\\begin{aligned}n_+=128\\\\\\\\n_-=128\\end{aligned}$ | $\\begin{aligned}n_+=256\\\\\\\\n_-=256\\end{aligned}$ | $\\begin{aligned}n_+=512\\\\\\\\n_-=512\\end{aligned}$ | $\\begin{aligned}n_+=1024\\\\\\\\n_-=1024\\end{aligned}$ | $\\begin{aligned}n_+=2048\\\\\\\\n_-=2048\\end{aligned}$ |\n|:---------:|:----------------:|:------------------:|:------------------:|:------------------:|:--------------------:|:--------------------:|\n| SOPA* | 0.079 | 0.206 | 1.439 | 5.197 | 20.556 | 88.314 |\n| SOPA-S* | 0.065 | 0.153 | 0.947 | 3.940 | 15.388 | 62.541 |\n| AUC-poly* | 0.062 | 0.180 | 1.175 | 3.573 | 14.440 | 56.469 |\n| AUC-exp* | 0.059 | 0.206 | 1.154 | 3.558 | 14.080 | 56.566 |\n| MB* | 0.173 | 0.198 | 0.491 | 1.955 | 6.554 | 29.369 |\n| PAUCI** | 0.030 | 0.030 | 0.038 | 0.045 | 0.071 | 0.109 |\n| AUC-M** | 0.025 | 0.027 | 0.033 | 0.043 | 0.059 | 0.104 |\n| CE** | 0.018 | 0.021 | 0.026 | 0.037 | 0.0535 | 0.096 |\n", " In this paper, they introduce a new method to optimize OPAUC and TPAUC. After presenting its derivation, they provide an implementation with SGD and analyze its convergence rate and generalization. Finally, they provide experiments comparing their new method to pre-existing. Strengths:\n* Cool reformulation ideas\n* Decent experiments showing the strengths of their method\n\nWeaknesses:\n* Unclear or sloppy notation (see questions 1, 4, 5, 6) 1. In the Notation section, is $\\mathcal{D}_Z$ understood to be a mixture of $\\mathcal{D}_P \\times \\{1\\}$ and $\\mathcal{D}_N \\times \\{0\\}$? If so, at what mixture proportion?\n2. 
In line 100, it is said that ``Note that (3) and (5) are hard to optimize since it is complicated to determine 100 the positive quantile $\\eta_\\alpha(f)$ and the negative quantile $\\eta_\\beta(f)$.'' This may be a silly question, but can't you just sort to find the largest $n^\\alpha_+$ and $n^\\beta_-$ scores? This would be like $O(n\\log n)$ time, which wouldn't be much worse than the $O(n)$ time.\n3. Can some intuition be provided behind $F_{op}$; i.e., what roles do all the introduced variables play? Does the equivalence of (9) hold for each $f$? That is, is it true that $\\mathcal{R}\\_\\beta(f) = \\min\\_{a,b} \\max\\_{\\gamma} \\mathbb{E}\\_z[F\\_{op}(\\dots)]$?\n4. In Line 115: Does $\\hat{\\mathbb{E}}_{x' \\sim \\mathcal{D}\\_\\mathcal{N}}$ mean empirical expectation over negative data points? If so, this seems inconsistent with the definition of $\\eta_\\beta(f)$ from before.\n5. Again in Line 115: Isn't $\\mathbb{I}[f(x') \\geq \\eta_\\beta]$ always 0 or 1? How can it equal $\\beta$?\n6. In the equation block in line 116: should it be $z_i$ instead of $z$ in the sum?\n7. The claim of asymptotic unbiasedness seems to come from the pointwise convergence of the objective $G_{op}^\\kappa$ as $\\kappa \\rightarrow \\infty$. However, I'm a bit worried that pointwise convergence of the objective may not necessarily entail convergence of the minimizer. It is especially worrying that $\\omega$ grows with $\\kappa$ in the training algorithm, so the regularization effect may dominate or cause some asymptotic bias. Can the claim of asymptotic unbiasedness be more fleshed out?\n8. As $G$ is a proxy function, are there conditions on the original problem that would make $G$ automatically satisfy Assumption 1? Main limitations are discussed in the questions section (in particular, questions 3 and 7).", " The paper proposes a nonconvex strongly concave min-max formulation for OPAUC and TPAUC maximization and employs a stochastic min-max algorithm with $O(\\epsilon^{-3})$ complexity. Strengths:\n1. The paper is well-organized and written clearly.\n2. The formulation conversion of PAUC is novel.\n\nWeaknesses:\n1. The paper employs an algorithm with a very strong (bad) assumption. $L_G$ in Assumption 1 can be infinity.\n2. Contribution is not significant enough.\n\n 1. In Table 1, the proposed algorithm is compared with [39], but difference between the paper and [39] is not mentioned in Introduction part. This paper approximates the PAUC as a non-convex strongly concave problem and then solves the approximated problem, while [39] solves the original PAUC problem. Therefore, the comparison of their convergence rates is not fair enough. Moreover, it would be more convincing to see some numerical results comparison with [39].\n2. The Assumption 1 requires G has Lipschitz continuous gradients for any data, which is a very strong assumption and $L_G$ can be infinity in this assumption. Is it possible to use a weaker assumption (e.g. the expectation of gradient is Lipschitz continuous) and get the same convergence rate?\n3. Can this formulation conversion be used to optimize PAUC in a range of FPR/TPR? Please see above.", " This paper proposes novel algorithms to improve the efficiency of partial AUC Optimization. Specifically, they present a reformulation scheme to transform the pairwise indifferentiable objective function into an instance-wise differentiable with an approximation scheme. Moreover, they provide generalization and optimization guarantees for their proposed method. 
The extensive experiment in this paper shows that the proposed method can outperform the state-of-art most times. Pros:\n\nThis paper presents an efficient reformulation scheme to make a complicated problem much more practical to solve. In other words, both the number of epochs and the per-iteration running time could be reduced significantly. \n\nThis proposed method also has a strong and comprehensive theoretical guarantee in terms of convergence and generalization. Moreover, technical details are non-trivial. I believe these merits can benefit the audience from a broad range of the ML community.\n\nThe experiments are extensive. Most of the competitors are quite SOTA.\n\nThe paper presents a solid work with the possibility to be employed in real-world problems. I only have some minor concerns, which I hope can be addressed during the rebuttal.\n\nCons:\n\nThe math is dense even in the main paper. Though I can understand most of the details, I think the authors can add more details and intuitive content to guide readers unfamiliar with AUC.\n\nI only see the performance comparisons in the main paper. I think efficiency is more important in this paper since the goal is to accelerate. So, I would also like to the running time comparisons in the experiments. \nAll my concerns are shown in the previous question. So, I will only list some suggestions.\n\n1.The methodology could be polished to be more friendly for the general readers.\n2. Since this paper is not the first PAUC optimization method. The related work should be moved to the main paper. This would be helpful for the readers to get the key contribution of the paper.\n3. There are some typos to be corrected, for example:\n(1) In the keywords, please change “min max” to “mini-max”\n(2) Line 168, “this problem。” should be “these problems.”\n(3)Line 125-129 is a bit redundant. Please rephrase it.\n YES", " This paper focuses on optimizing the One-way (Two-way) Partial AUC metric, which is challenging since a ranking constraint is involved in the objective function. Interestingly, this paper presents a simple instance-wise reformulation of the original objective, which is unbiased in an asymptotic sense. It turns out that the complicated problem could be solved with an accelerated minimax optimization problem. Moreover, the convergence rate can thus be improved. Empirically, the experiments also show its superiority in most cases. Strength:\n\n1) The reformulation of the original problem is impressive to me, where the ranking constraints are canceled by conditional expectation and a differentiable reformulation of the top-k (bottom-k) ranking. \n\n2) The generalization analysis is interesting, where the minimax reformulation can also simplify the derivation of uniform convergence bounds. Moreover, the differentiable formulation also allows the analysis to deal with real-valued hypothesis classes, which previous works often fail to do.\n\n3) Though the convergence analysis is an existing result. It is also good to see that the convergence rate could decrease to O(T^{-3}) due to the reformulation.\n\nWeakness:\n\n1) It seems that there are some typos in the proof. For example, in line 529, in the decomposition of the conditional risk, I think $\\ell$ should be replaced with $\\ell_{0-1}$. The same problem exists in line 578. \n\n2) In Figures 4-5, I can only see the efficiency improvement in the number of iterations. But the authors also claimed that the reformulation could improve the per-iteration efficiency, which I do agree. 
I think it may be better if they could give some empirical comparison in terms of this.\n\n1) How will the reformulation improve the per-iteration complexity? Show it with experiments.\n\n2) In Sec. 5, the generalization analysis is based on the fact that the (empirical) risk is proportional to the minimax (empirical) risk. I’m wondering if the final result is also proportional to the excess risk $R - \\hat{R}$.\n\n3) The batch size in this paper is set to 1024, which is quite large. I’m wondering how the authors implement this in their experiments.\n\n4) It is necessary to state some analysis on the explicit upper bounds for the empirical Rademacher complexity, e.g., after Theorem 4.\n\nAll my concerns are presented in “Weakness” and “Questions”. This paper focuses on designing an efficient and asymptotically unbiased algorithm for PAUC, which seems to pose no potential negative social impact. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 5, 5 ]
[ "jYA0GZmRPOZ", "zx9ApGRjs_", "E6gzBn1k0oz", "UvDhsX55nbV", "nips_2022_er4GR0wHWQO", "p0NKdSs56V", "bYm0TjxljJe", "bYm0TjxljJe", "bYm0TjxljJe", "bYm0TjxljJe", "bYm0TjxljJe", "-HnuACj0lr3", "RWM3108R5SF", "TnDol5yBhgaJ", "k92K3JuWulV", "k92K3JuWulV", "k92K3JuWulV", "k92K3JuWulV", "PSe3kYoKbZz", "PSe3kYoKbZz", "9_QOmOPnbwW", "9_QOmOPnbwW", "TG5-uSKkMzQ", "nips_2022_er4GR0wHWQO", "nips_2022_er4GR0wHWQO", "nips_2022_er4GR0wHWQO", "nips_2022_er4GR0wHWQO" ]
nips_2022_Lpla1jmJkW
Constants of motion network
The beauty of physics is that there is usually a conserved quantity in an always-changing system, known as the constant of motion. Finding the constant of motion is important in understanding the dynamics of the system, but typically requires mathematical proficiency and manual analytical work. In this paper, we present a neural network that can simultaneously learn the dynamics of the system and the constants of motion from data. By exploiting the discovered constants of motion, it can produce better predictions of the dynamics and can work on a wider range of systems than Hamiltonian-based neural networks. In addition, the training progress of our method can be used as an indication of the number of constants of motion in a system, which could be useful in studying a novel physical system.
Accept
Two of the three reviewers highly appreciated the rebuttal and are now recommending the paper for acceptance without any reservations. The third and most critical reviewer, FmUK, unfortunately did not react. The new "learning from pixels" experiments nicely address the reviewer's concern about having to carefully choose the system state, and the concern about the number of constants of motion is also well addressed. The answer to the question about sensitivity to noise could have been stronger: FmUK was talking about physical systems whose states typically include accelerations, while typical sensors only measure positions/angles (which already produce slightly noisy measurements). Applying numerical differentiation twice to obtain the accelerations often results in very noisy signals. Hence the question of how representative the experiments (where the states, which also include $\dot{x}$, are assumed to be measured perfectly) are of real systems. I think this is still an interesting point to discuss, but it is no deal-breaker for me.
val
[ "YDUrRXUyf9J", "rWxz53oNOMm", "WALXgxhJibJ", "iOUdWe3rR5l", "W0kJl_1RGl", "koMUAhowAP", "MopwIkWwmeS", "8uL7eZFYbra" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their reviews and for increasing the score.", " I thank the authors for their informative replies. Ideally I would have liked some more analysis of the QR part, but the paper is otherwise well written, the technical approach appears well founded, and it improves upon relevant baselines. ", " Thank you for your detailed review.\n\n> The assumed number of constants of motion appears to have a significant effect (Fig. 5). How were these selected in the benchmark examples?\n\nThe cases in section 4 are well-known physical systems where the number of constants of motion is known.\nFor cases where the number of constants of motion is unknown, one can follow the procedure in section 6 to get the maximum number of constants of motion where COMET still works well.\nIt can also be combined with other method to approximate the number of constants of motion, e.g. [Chen, *et al.*, 2022](https://doi.org/10.1038/s43588-022-00281-6), followed by the procedure in section 6 to find the number of constants of motion faster.\nOnce the maximum number of constants of motion is found, one should use that in training COMET to provide better long-term predictions.\n\n> It is a bit concerning that it degrades for 50 hidden neurons on the 2D non-linear spring system, there are still thousands of parameters in that network. Any idea why? Could this have something to do with biases induced by the QR decomposition? It is not obvious to me what kind of biases that will induce.\n\nOur first guess is because it does not have sufficient expression capability to express the constants of motion.\nOne of the constants of motion in the 2D non-linear spring system is the energy where it's proportional to $|\\mathbf{r}|^4$ which might be hard to represent with smaller neural networks.\nAnother thing is that training with QR tends to repel the gradients of constants of motion, $\\nabla \\mathbf{c}$, to be linearly independent to each other (it induces large gradient when the gradients are almost linearly dependent).\nThis might make the training path more complicated to reach the correct constants of motion.\nWe can't say for sure what's the problem, but those are our hypothesis.\nProving it right or wrong and finding the right explanation is currently out of the scope of this paper and we will leave it out as a future work.\n\n> As an engineer, I would have liked to see some example on how this approach can be used on some more complex real-world-like sysid problem.\n\nThank you for the pointer. We would also be interested to see this, however, as the time is limited, we currently cannot present it in the first round of rebuttal.\nIf the referee could give some reference for the interesting sysid problem that is easy to setup, we would greatly appreciate it.\n\n**Minor presentation issues:**\n\n> \"grad\" -> writing out \"gradient\" in text would read better, or typeset it as a math operator (or nabla)\n\nThank you, we've changed it in the revised version.\n\n> l282: works similar/is comparable? to\n\nWith no constant of motion, the dynamics is just the direct output of the neural network without any postprocessing (i.e. QR in COMET).\nSo we say \"works similarly\" instead of \"is comparable\".\n\n> l290: which field, or just on science? 
l290: more points\n\nThank you, we've changed it to \"deep learning on physical sciences\", as it is the field that we're comfortable mentioning.\n", " Thank you very much for your thorough review.\n\n> (1) **Choosing the system state.** In classical mechanics the state of the system and its derived quantities are fundamental quantities with specific properties. The choice of the state isn't trivial and it requires an analytical understanding of the system itself. In the proposed paper, authors assume that the state is given and often coincides with the one used in classical mechanics. However, in applications relevant to machine learning (as the one proposed), assuming that the state is given is quite a restrictive assumption.\n\nOne of the main point that we want to demonstrate in the paper is that COMET has less restrictions in terms of choosing the states compared to other methods, such as LNN and HNN.\nWe showed this in the pendulum cases by choosing $x, y, \\dot{x}, \\dot{y}$ as the state variables instead of the usual $\\theta, \\dot{\\theta}$ for HNN and LNN.\nChoosing 4 variables instead of 2 also shows that COMET can work with redundant state variables.\nWe also showed this in the Lotka-Volterra case where we choose the simplest variables, not special canonical coordinates (i.e. the ones required by HNN), and not states that consists of $q,\\dot{q}$ (i.e. the ones required by LNN).\nHowever, by having only a few cases for the illustration, we can see why this point is missed.\n\nTo strengthen this point, we have added a new experiment in learning the two-body dynamics via images (2nd part in section 7 in the revised version).\nIn this case, the images are encoded to latent state variables, and then decoded back to the images (i.e. auto-encoder).\nThe dynamics of the latent state variables (which are learned and not given explicitly in the dataset) were learned succesfully by COMET.\nIt was done even without additional constraints on the latent state variables (unlike HNN, for example).\nThe success of this experiment shows that COMET can work even if the states are not given explicitly.\n\n> **(2) Choosing the system state dimensionality and number of constraints.** Section 6 proposes an heuristic for finding the number of constraints of motion. The problem is quite fundamental and it deserves a more thorough investigation. 
Authors could try to get inspiration from solutions to a similar problem in the scope of system identification and model selection.\n\nThe main purpose of this paper is to learn the dynamics more accurately by exploiting the presence of constants of motion.\nIt is a proposed improvement to HNN, LNN, NSF, as well as other works in terms of generalizability.\nFinding the number of constants of motion is also an interesting research field, but this is not the main focus of our paper.\nWe present it as a means of maximizing the benefit of COMET for unknown systems by setting the number of constants of motion as close as possible to the true number.\nIt can, of course, be combined with other methods (such as [Chen, *et al.*, 2022](https://doi.org/10.1038/s43588-022-00281-6)) to approximate the number of constants of motion, followed by the procedure from section 6 to speed up the process.\n\n**Questions**\n\n> How robust is the proposed methodology to the choice of the system state?\n\nCOMET does not need a special set of state coordinates, unlike HNN (that requires canonical coordinates) and LNN (that requires the states to be consisted of $q,\\dot{q}$).\nAs long as the chosen state is sufficient to determine the dynamics, it can be used in COMET.\nTo strenghten our point, we have added a new experiment in the revised version about learning the dynamics of a system in the latent space of an auto-encoder (following the reviewer's recommendation) without additional constraints on the latent variables.\nThis shows that COMET is robust with the choice of the system state.\n\n> How robust is the proposed methodology to noise on the system derivative? \n\nThe noise affects the accuracy of the trained model, but it does not affect much of the stability of the predicted trajectory as COMET tries to conserve a preset number of constants of motion.\nFor example, adding noise to the two body case would make the orbit's frequency prediction a bit off, but the predicted trajectory will still be similar to the true trajectory (e.g. see Figure 3).\nOn the other hand, methods such as HNN, LNN, and NSF only conserves one constant of motion.\nHence, adding noise to those method affects the stability of the predicted trajectory more severely than COMET (e.g. Figure 3).\n\n> What is the role of $\\dot{s_0}(s)$ at test time?\n\n$\\dot{s_0}(s)$ is one of the direct outputs of the neural network besides $c(s)$, while $\\dot{s}(s)$ is not the direct output of the NN.\n$\\dot{s_0}(s)$ is used as the initial guess of the directions where it will be orthogonalized w.r.t. 
$\\nabla c$ with QR decomposition to get $\\dot{s}(s)$.\nIn (4), the term $|\\dot{s_0}-\\hat{\\dot{s}}|^2$ is to make the initial guess as close as possible to the true value, thus makes the training faster.\n", " Thank you for your elaborate review.\n\n> Most of the experiments are conducted in relatively simple dynamical systems.\n\nWe presented systems that readers can easily understand so we can focus on demonstrating the wide applicability of COMET.\nOne of the case we use in the paper is the two-body system where it has 8 states and section 7 contains an example of a PDE where it has infinite degrees of freedom.\nHowever, in order to include more complicated examples, we have added a new experiment in section 7 in the revised version about learning the dynamics of latent variables of an auto-encoder (\"Learning from pixels\").\n\n> It will be interesting to compare the method with Lagrangian and Hamiltonian based methods in higher dimensional mechanical systems (such as robots). I suspect that LNN may outperform the proposed method in some of these cases.\n\nWe agree that LNN might outperform COMET in some cases. This is also apparent in our results in Table 1.\nOur hypothesis is that because LNN explicitly assumes that half of the states are the time derivative of the other half of the states, it contains the second-order bias that is shown by [Gruver, *et al.*, 2022](https://arxiv.org/pdf/2202.04836.pdf) to improve results.\nOn the other hand, the assumption makes LNN does not work on cases where the assumption is invalid (e.g. KdV and Lotka-Volterra).\nMoreover, in some cases, we found that the dynamics learned by LNN is highly stiff that makes it cannot be integrated.\n\n> In table 1, LNN is shown to not work with dissipative systems, which I am not convinced of. It will be surprising that a Lagrangian based method cannot handle dissipations.\n\nThe dynamics of LNN (i.e. equation 6 of [Cranmer, *et al.*, 2020](https://arxiv.org/pdf/2003.04630.pdf)) is derived by assuming the Lagrangian does not depend explicitly on the time $t$.\nHowever, for dissipative systems, the Lagrangian has an explicit dependence on time (e.g. see [Kobe, *et al.*, 1986](https://doi.org/10.1119/1.14840)) which violates the LNN assumption.\n\n> The proposed method, COMET, performs significantly worse than other baselines such as LNN in the 2D pendulum task, and even worse than NODE, which is surprising–According to the authors, if the system does not have any symmetry it will be the same as NODE. \n\nLNN performs better than COMET in those cases because it has the second-order bias as mentioned in our response above.\nHowever in some cases, LNN fails to integrate as it produces dynamics that are too stiff.\n\nWhen compared to NODE, the median error produced by COMET is slightly higher than NODE only in 1 out of 6 tested cases in section 4.\nHowever, the upper bound error of COMET is consistently lower than NODE in all of the tested cases, and sometimes it is significantly lower.\n\n**Questions answered**\n\n> 1, Explain a bit more on why QR decomposition is chosen as the way to orthogonalization. What other methods are available? Why this particular one (i.e. 
differentiability, computation simplicity, etc)?\n\nThere are several algorithms to perform orthogonalization, such as: (1) Gram-Schmidt (GS) decomposition, (2) Householder (HH) transformation, or (3) Givens rotation (GR).\nGS is well-known for its severe numerical instability.\nHH provides much better numerical stability and is also the one usually implemented in modern QR decomposition routines, making it easy to use.\nThis is what we use.\nThe third option, GR, has the advantage of being easily parallelized. However, at this stage, parallelizing the orthogonalization is not our top priority.\n\n> 2, Explain a bit more on the \"stiffness\" of the dynamics. Why COMET, LNN, NST can produce stiffer dynamics?\n\nWe cannot say for sure what the reason for the stiffness of these methods is, especially for LNN and NSF, but we have a hypothesis for COMET.\n\nStiffness usually happens when some of the eigenvalues of the Jacobian $\\partial\\dot{\\mathbf{s}}/\\partial\\mathbf{s}$ are very large.\nAs COMET uses QR decomposition, the Jacobian matrix depends on the gradient of the QR decomposition, which depends on the inverse of the $\\mathbf{R}$ matrix (i.e. one of the QR outputs).\nIf the matrix $\\mathbf{R}$ is nearly singular, it will produce large gradient values that could make the dynamics stiff.\nThe matrix $\\mathbf{R}$ can be nearly singular if the training produces $\\nabla c$ that are nearly linearly dependent on each other.\nThis case, although rare, is possible.\n\n> 3, Figure 5 shows that COMET works even without all COMs included. Can the authors add an ablation study to show how well COMET performs with a partial set of COMs?\n\nWe have added a simple ablation study comparing the predictions of COMET with varying numbers of constants of motion (currently only 1, 2, and the full set of constants of motion, but hopefully we can add more).\nWe see that by increasing the number of constants of motion, the upper bound of the prediction error decreases, which means that COMET can better maintain the stability of the trajectory as the number of constants of motion increases.\n", " In this paper, the authors proposed a novel method to learn dynamical systems from data. Instead of formulating the system Lagrangians or Hamiltonians, the method leverages symmetry and conservation laws and thus does not assume any specific dynamical system form. The proposed network predicts both a guess of the generalized velocity and the set of constants of motion (COMs). Then, by exploiting the orthogonality between the gradients of the COMs and the generalized velocity, the final system velocity is projected through a QR decomposition. The authors show that their method performs equally well or better than many baselines in many simulated dynamical systems. \n The strengths of the paper:\n\nThe method is quite novel yet simple, i.e. it utilizes the system’s symmetry structure and conservation laws to better learn the system dynamics. \n\nThe proposed method can handle many different types of dynamical systems, including Hamiltonian systems, dissipative systems, and even infinite-dimensional PDEs. \n\nThe method outperforms many baselines in a wide range of tasks, and the generated trajectory has higher quality in some tasks (i.e. two-body systems). \n\nThe method works well even with a partial list of constants of motion, as shown in Figure 5. \n\nThe weaknesses of the paper:\n\nMost of the experiments are conducted in relatively simple dynamical systems. 
It will be interesting to compare the method with Lagrangian and Hamiltonian based methods in higher dimensional mechanical systems (such as robots). I suspect that LNN may outperform the proposed method in some of these cases. \n\nIn table 1, LNN is shown to not work with dissipative systems, which I am not convinced of. It will be surprising that a Lagrangian based method cannot handle dissipations.\n\nThe proposed method, COMET, performs significantly worse than other baselines such as LNN in the 2D pendulum task, and even worse than NODE, which is surprising–According to the authors, if the system does not have any symmetry it will be the same as NODE. Besides the main comment, I have a few questions and suggestions for the authors:\n\n1, Explain a bit more on why QR decomposition is chosen as the way to orthogonalization. What other methods are available? Why this particular one (i.e. differentiability, computation simplicity, etc)? \n2, Explain a bit more on the “stiffness” of the dynamics. Why COMET, LNN, NST can produce stiffer dynamics?\n\n3, Figure 5 shows that COMET works even without all COMs included. Can the authors add an ablation study to show how well COMET performs with a partial set of COMs? N/A", " The paper is well written and easy to follow. It describes a methodology to learn a system of differential equations from data. The methodology can be used to capture the evolution of a given multi-dimensional time series s(t); possibly the time series can be influenced by an exogenous input (e.g. a force acting on the system). The working hypothesis is that the provided time series is the result of a constrained differential equation, i.e. there exists a function c(.) such that c(s) is constant on the trajectory s(t). \n\nThe proposed approach is tested on motion data (i.e. frictionless mass-spring, 2D pendulum, 2D damped pendulum, two body interactions, 2D nonlinear spring, Lotka-Volterra dynamics). Results show that the proposed method performs better than previous approaches (i.e. NODE, HNN, NSF, LNN) in terms of mean squared error on prediction. Some heuristics for choosing the number of constraints acting on the system are proposed; generalization to systems with infinite number of states is also proposed. The paper is well written and easy to read. The structure of the paper is sound and results have been presented with enough details to understand the main point of the submitted paper. The major but fundamental weakness of the paper is that it focuses on relatively simple and artificial examples (e.g. mass-spring, pendulum) which aren't of practical use but merely useful for assessing the soundness of the approach. Even though I can see the value of these examples when explaining Lagrangian or Hamiltonian physics (where we need to understand how the different analytical components interact), I see very limited value in using these artificial examples when explaining how to use neural-networks to approximate a given time series. By focusing on this artificial examples, authors fall short in addressing fundamental questions listed below.\n\n(1) ***Choosing the system state***. In classical mechanics (e.g. Newton-Euler, Legrangian, Hamiltonian) the state of the system and its derived quantities (e.g. its derivative, the momentum) are fundamental quantities with specific properties. The choice of the state isn't trivial and it requires an analytical understanding of the system itself. 
In the proposed paper, authors assume that the state is given and often coincides with the one used in classical mechanics. However, in applications relevant to machine learning (as the one proposed), assuming that the state is given is quite a restrictive assumption. \n\n(2) ***Choosing the system state dimensionality and number of constraints***. Section 6 proposes an heuristic for finding the number of constraints of motion. The problem is quite fundamental and it deserves a more thorough investigation. Authors could try to get inspiration from solutions to a similar problem in the scope of system identification and model selection (e.g. MDL, minimum description language for model selection; AIC, Akaike information criteria). \n - ***How robust is the proposed methodology to the choice of the system state?*** Do the authors have evidence that the proposed approach could potentially work replacing the given system state with an embedding (such as the latent of a VAEs) learnt from observations which are connected to the state but possibly higher dimensional (e.g. an image of the pendulum)?\n\n- ***How robust is the proposed methodology to noise on the system derivative?*** The system state is always composed of the system generalized coordinates and their derivatives. As a result the system state derivative often contains the generalized coordinates accelerations (i.e. second order derivatives) which are extremely noisy when measured on real systems. Authors mentioned the states rate change was subject to a Gaussian noise with standard deviation of 0.05. Can authors elaborate on how this is representative of the typical noise observed in real systems when second order derivatives are approximated (e.g. finite differences)? \n\n- ***What is the role of $s_0(s)$ at test time?*** Looking at (4) and (2), it seems that the only role of $s_0$ is (at training time) to guide $s$ to $\\hat{s}$ and to be sure that at convergence $\\dot{s} \\in \\langle \\nabla c_1, \\dots, \\nabla c_n \\rangle^\\perp$. After training it is unclear whether the function $s_0$ is needed. Can author describe better the need to explicitly learn the function $s_0(s)$? Not applicable. ", " In the context of learning models for dynamical systems, the authors propose to learn the constants of motion of the system by means of a neural mapping from states. The method is validated on several (albeit small) simulated dynamical systems, where it appears to improve upon previous approaches. This paper was mostly clearly written (if occasionally abstract) and enjoyable to read. While mechanics is not my area of expertise, it proposes what seems like a novel idea for modelling of dynamical systems (learning constants of motion). It also empirically improves upon the results of the famous neural ODE paper and later developments. On the other hand, the evaluation is entirely on what seems like quite simple examples. \n\nConcerns:\n-----------------\nPresentation: \n- Some intuition of how the learning interacts with the integration scheme would have been helpful (c.f. NODE paper). It is quite abstract. \n\nExperiments: \n- The assumed number of constants of motion appears to have a significant effect (Fig. 5). How were these selected in the benchmark examples?\n\n- It is a bit concerning that it degrades for 50 hidden neurons on the 2D non-linear spring system, there are still thousands of parameters in that network. Any idea why? Could this have something to do with biases induced by the QR decomposition? 
It is not obvious to me what kind of biases that will induce.\n\n- As an engineer, I would have liked to see some example on how this approach can be used on some more complex real-world-like sysid problem.\n\nMinor presentation issues:\n\n\"grad\" -> writing out \"gradient\" in text would read better, or typeset it as a math operator (or nabla)\n\nl282: works similar/is comparable? to\n\nl290: which field, or just on science?\nl290: more point*s* \n\n The assumed number of constants of motion (n_c) appears to have a significant effect (Fig. 5). Maybe I missed it, but I couldn't find how these were selected in the benchmark examples?\n\nIt is a bit concerning that it degrades for 50 hidden neurons on the 2D non-linear spring system, with three layers there are still thousands of parameters in that network. Any idea why? Could this have something to do with biases induced by the QR decomposition? It is not obvious to me what kind of biases that will induce.\n Mostly adequate." ]
[ -1, -1, -1, -1, -1, 7, 3, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "rWxz53oNOMm", "WALXgxhJibJ", "8uL7eZFYbra", "MopwIkWwmeS", "koMUAhowAP", "nips_2022_Lpla1jmJkW", "nips_2022_Lpla1jmJkW", "nips_2022_Lpla1jmJkW" ]
nips_2022_gbXqMdxsZIP
OTKGE: Multi-modal Knowledge Graph Embeddings via Optimal Transport
Multi-modal knowledge graph embeddings (KGE) have caught more and more attention in learning representations of entities and relations for link prediction tasks. Different from previous uni-modal KGE approaches, multi-modal KGE can leverage expressive knowledge from a wealth of modalities (image, text, etc.), leading to more comprehensive representations of real-world entities. However, the critical challenge along this course lies in that the multi-modal embedding spaces are usually heterogeneous. In this sense, direct fusion will destroy the inherent spatial structure of different modal embeddings. To overcome this challenge, we revisit multi-modal KGE from a distributional alignment perspective and propose optimal transport knowledge graph embeddings (OTKGE). Specifically, we model the multi-modal fusion procedure as a transport plan moving different modal embeddings to a unified space by minimizing the Wasserstein distance between multi-modal distributions. Theoretically, we show that by minimizing the Wasserstein distance between the individual modalities and the unified embedding space, the final results are guaranteed to maintain consistency and comprehensiveness. Moreover, experimental results on well-established multi-modal knowledge graph completion benchmarks show that our OTKGE achieves state-of-the-art performance.
Accept
This paper presents a method to learn multi-modal knowledge graph embeddings. To integrate the embeddings from different modalities, which is a difficult task because of the heterogeneity across the different modalities, the paper presents an optimal transport based method to learn multi-modal embeddings. The paper received positive reviews from all the reviewers. The authors submitted a rebuttal to answer the questions from the reviewers, and the reviewers seem to be satisfied. Given the unanimously positive reviews and my own reading of the paper, I vote for the acceptance of the paper.
train
[ "mkcGDxBssw", "WcZQDd4CBgb", "M_5874oCZoB", "nMvuqaY44HU", "sgzI8B8IUl3", "e1VKGCggCE7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank you for taking the time to carefully read our submission and for providing such detailed suggestions. Our responses are as follows.\n\n__Q1:__ In Equation (4), does the symbol $E$ represent the structural embedding? Equation (4) is supposed to be further explained. What is the insight behind the multi-modal fusion based on the Wasserstein barycenter?\n\n__A1:__ The symbol $E$ represents the fusion embedding, while $E_1$, $E_2$ and $E_3$ represent the structural embedding, the transported linguistic embedding, and the transported visual embedding, respectively. Once we have obtained the structural, visual, and linguistic embeddings, Equation (4) states that the fusion embedding is the closest point to them under the Wasserstein distance metric. In this way, the fusion embedding is the Wasserstein barycenter of the structural, visual, and linguistic embeddings. \n\n__Q2:__ In the experiment, the author should compare the proposed fusion method with other multi-modal fusion methods based on the aligned multi-modal embeddings. \n\n__A2:__ Thank you for this valuable suggestion. We conducted an experiment to compare the proposed fusion method with other multi-modal fusion methods. We use the fusion methods in [1], [2], and [3] to replace the OT fusion in OTKGE, and denote them as Model1, Model2, and Model3. We conduct the experiments on the FBIMG dataset, as shown in the following table. We can observe that the OT fusion achieves the best performance; the reason may be that the OT fusion in OTKGE can better capture the geometric information of the embeddings and overcome the heterogeneity of the different embedding spaces.\n\n| Model | MRR | Hit@1 | Hit@3 | Hit@10 |\n| :---: | :---: | :---: | :---: | :---: |\n| Model1 | .595 | .578 | .606 | .670 |\n| Model2 | .594 | .576 | .605 | .668 |\n| Model3 | .596 | .581 | .609 | .671 |\n| OTKGE | .601 | .583 | .615 | .673 |\n\n[1] P. Lu, H. Li, W. Zhang, J. Wang, and X. Wang, “Co-attending free-form regions and detections with multi-modal multiplicative feature embedding for visual question answering,” in Proc. AAAI, 2018.\n\n[2] H. Ben-Younes, R. Cadene, N. Thome, and M. Cord, “BLOCK: Bilinear superdiagonal fusion for visual question answering and visual relationship detection,” in Proc. AAAI, 2019.\n\n[3] J.-M. Pérez-Rúa, V. Vielzeuf, S. Pateux, M. Baccouche, and F. Jurie, “MFAS: Multimodal fusion architecture search,” in Proc. CVPR, 2019.\n\n__Q3:__ The idea of aligning the different modalities is not novel.\n\n__A3:__ We apologize for the unclear presentation of the novelty. Our main novelty can be briefly rephrased as follows. While leveraging a wealth of multi-modal knowledge to learn more realistic representations of real-world entities, previous work usually neglects the heterogeneity of distributions in the multi-modal fusion, which harms the interaction of multi-modal knowledge. We propose to transfer the multi-modal information to a unified space by optimal transport and to fuse the multi-modal information with the Wasserstein barycenter. This effectively tackles the spatial heterogeneity across modalities by reducing the Wasserstein distance between different modal distributions.", " Thank you so much for your acceptance as well as your constructive comments. The responses to your concerns are as follows.\n\n__Q1:__ Why does OTKGE outperform other baselines in the uni-modal setting? 
Is that attributed to the design of relation $r$?\n\n__A1:__ Relations play important roles in capturing the semantics of KGs, and the ability to model relation $r$ can affect the performance of the model to a considerable extent. \n+ (1) Recall that there are rich, complex relations between entities in some uni-modal KGE datasets such as FB15k-237, which contains 237 kinds of relations. This poses a challenge for learning semantic relations in knowledge graphs. To tackle this issue, the relation $r$ we designed provides OTKGE with the ability to model many key patterns, e.g., symmetry, anti-symmetry, inversion, composition, transitivity, hierarchy, intersection, and mutual exclusion patterns. In this way, it can handle the above-mentioned complex relationships in KGs and effectively capture latent semantics between entities. \n+ (2) To study the role that relation $r$ plays in uni-modal KGs, we conduct experiments on uni-modal KG datasets. As shown in the following table, we denote the version of OTKGE which removes the transformation (Eq. (1)) as OTKGE w/o trans. One can observe that the performance of OTKGE w/o trans is reduced. In particular, the performance drop of OTKGE on FB15k-237 is more obvious, which shows the effectiveness of the design of relation $r$ in dealing with complex relations. \n\n| Model | Dataset | MRR | Hit@1 | Hit@3 | Hit@10 |\n| :---: | :---: | :---: | :---: | :---: | :---: |\n| OTKGE | WN18RR | .495 | .449 | .508 | .571 |\n| OTKGE w/o trans | WN18RR | .488 | .441 | .495 | .565 |\n| OTKGE | FB15k-237 | .371 | .276 | .410 | .560 |\n| OTKGE w/o trans | FB15k-237 | .357 | .264 | .391 | .542 |\n\n+ (3) As for Table 3 in the paper, one can notice that OTKGE w/o trans has minimal performance degradation on multi-modal relations. The reason is that the multi-modal data provide rich auxiliary information, which yields gains for modeling complex relationships. Under such circumstances, we can observe that the design of $r$ brings limited gains in model performance.", " Thank you so much for your acceptance as well as your constructive comments. The responses to your concerns are as follows.\n\n__Q1:__ Will the relationship in Eq. 12 always hold? If not, what are the possible cases that violate Eq. 12?\n\n__A1:__ Generally speaking, the relationship in Eq. 12 always holds empirically. However, there are also some cases that violate it. For instance, if the information of a modality is inaccurate or noisy, the quality of entity representation learning will be affected or even reduced after adding this modality. In such circumstances, Eq. 12 does not hold. How to improve the quality of multi-modal information or remove noise is also a direction we are interested in for future work.\n\n__Q2:__ Could you provide some experimental results regarding the argument that the impact of the intrinsic complexity of function classes will be reduced when the sample size $m$ is larger?\n\n__A2:__ Thank you for your useful advice. To be specific, the impact of the intrinsic complexity of function classes will be reduced when the sample size $m$ is larger, which means the performance difference between multi-modal KGE and uni-modal KGE will also be reduced. Based on this, we conduct an experiment on the FBIMG dataset to demonstrate this argument. We denote FB%x as the dataset version that removes x percent of the data in the training set of FBIMG. The experimental results are shown in the following table. 
We can observe that as the number of samples increases, the performance difference between single-mode and multi-mode decreases.\n|Dataset|Modal |MRR | Hit@1| Hit@3 | Hit@10|\n|:---: |:--:|:-----------: | :---:|:---:|:---:|\n| FB%30 |uni-modal |.553 |.529 | .567 | .631|\n| FB%30 |multi-modal |.575 |.553 | .579 | .655|\n| FB%20 | uni-modal |.582|.558 |.586|.657 |\n| FB%20 | multi-modal |.593|.567 |.595|.671 |\n| FB%10 | uni-modal |.596 |.578 |.602|.665|\n| FB%10| multi-modal |.601|.583 |.615|.673 |", " This paper proposes optimal transport (OT) for the multi-modal fusion procedure of multi-modal KGE. It further provides theoretical analysis on target errors in OT fusion and generalization bounds for multi-modal KGE. Strengths\n1. For previous multi-modal KGE methods, the embeddings from different modals are in various heterogeneous spaces, which is hard to avoid by using direct fusion. This paper proposes to mitigate the gap between heterogeneous spaces by using optimal transport (OT). The results of the ablation study clearly show the advantage of using OT for fusion over direction fusion counterparts like average or concatenation for fusion. Moreover, OTKGE can capture a wider range of Inference patterns than previous works.\n2. Theoretical analysis of two important questions is given. Theorem 1 shows the target error can be bounded by the Wasserstein distances of different modals. Theorem 2 shows the generalization bound for the latent representation quality, and it further implies that multi-modal KGE can outperform uni-modal KGE analytically. In addition, the guarantee of Theorem 1 can not be easily achieved by other methods due to their fusion approaches. Theorem 2 justifies why multi-modal KGE is better. I think these theoretical results well distinguish the proposed methods from previous methods.\n3. The experimental results are comprehensive, and the ablation study aligns with theoretical analysis.\n\nWeaknesses\n1. The organization of the theoretical analysis can be improved. It's better to split section 4 into two subsections. In addition, it would be better if the authors could state the relationships between Theorem 1 and Theorem 2 (if there are any). In addition, algorithm 1 and algorithm 2 can be moved into the main text instead of the appendix.\n 1. Will the relationship in Eq. 12 always hold? If not, what are the possible cases that violate Eq.12?\n2. In Line 245-246, the authors said that \"The impact of the intrinsic complexity of function classes will be reduced when the sample size m is larger\", I think this result is quite interesting. Could you provide some experimental results regarding this argument? \n The limitations of this paper are properly discussed.", " The paper studies the multi-modal knowledge graph embedding, which leverages textual, visual, and structure knowledge for learning more accurate representations of entities. The authors propose to utilize the optimal transport to transport different embedding distributions into a unified embedding space, then design a fusion strategy to obtain the final entity embeddings. The authors also provide theoretical analysis to support their claims. Strengths:\n- The work is well motivated. Multi-modal data have heterogeneous spaces. 
How to fuse their representations is an interesting problem.\n- It is novel to leverage the optimal transport to transfer different embedding distributions to a unified space.\n- Theoretical analysis is provided to support the central claims.\n- The paper is well written and easy to follow.\n\n\nWeaknesses:\n- Experimental results need more discussion and explanations.\n - I wonder why OTKGE outperforms other baselines in the uni-modal setting. Is that attributed to the design of relation r? However, we can see from Table 3 that OTKGE w/o trans has minimal performance degradation. It would be better if the authors could provide more detailed ablation studies on uni-model datasets. Yes", " This paper focuses on the multi-modal fusion problem. The authors propose a method to align the different modalities by reducing the Wasserstein distance between different modal distributions. Experimental results show that the proposed method achieves state-of-the-art performance. Strengths:\n1 The paper is well written and easy to follow. \n2 The motivation is clear.\n3 The experiments and ablation studies well demonstrate the effectiveness of the proposed method.\nWeaknesses:\n1 The idea of aligning the different modalities is not novel.\n2 Equation 4 is supposed to be further explained. What’s the insight behind the multi-modal fusion based on the Wasserstein barycenter?\n 1 In Equation (4), the symbol E represents the structural embedding? The author should clarify.\n2 In the experiment, the author should compare the proposed fusion method with other multi-modal fusion methods based on the aligned multi-modal embeddings.\n Yes" ]
[ -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, 4, 3, 4 ]
[ "e1VKGCggCE7", "sgzI8B8IUl3", "nMvuqaY44HU", "nips_2022_gbXqMdxsZIP", "nips_2022_gbXqMdxsZIP", "nips_2022_gbXqMdxsZIP" ]
nips_2022_m6DJxSuKuqF
Keypoint-Guided Optimal Transport with Applications in Heterogeneous Domain Adaptation
Existing Optimal Transport (OT) methods mainly derive the optimal transport plan/matching under the criterion of transport cost/distance minimization, which may cause incorrect matching in some cases. In many applications, annotating a few matched keypoints across domains is reasonable or even effortless in annotation burden. It is valuable to investigate how to leverage the annotated keypoints to guide the correct matching in OT. In this paper, we propose a novel KeyPoint-Guided model by ReLation preservation (KPG-RL) that searches for the matching guided by the keypoints in OT. To impose the keypoints in OT, first, we propose a mask-based constraint of the transport plan that preserves the matching of keypoint pairs. Second, we propose to preserve the relation of each data point to the keypoints to guide the matching. The proposed KPG-RL model can be solved by the Sinkhorn's algorithm and is applicable even when distributions are supported in different spaces. We further utilize the relation preservation constraint in the Kantorovich Problem and Gromov-Wasserstein model to impose the guidance of keypoints in them. Meanwhile, the proposed KPG-RL model is extended to partial OT setting. As an application, we apply the proposed KPG-RL model to the heterogeneous domain adaptation. Experiments verified the effectiveness of the KPG-RL model.
Accept
In this paper the authors propose a novel Optimal Transport problem that uses a small number of annotated keypoints in both source and target domain to encode additional information and guide the OT plan in the problem. The authors propose a variant of the sinkhorn algorithm to solve the problem and show that it can be used to solve OT across different spaces with also an extension to Partial OT setting. Numerical experiments show the interest of the method on a difficult heterogeneous domain adaptation problem. The contribution was appreciated and all reviewers agree about the novelty of the method and the interest of the new model in practical applications. All experiments (in the paper, appendix and the new ones in reply) show that the method work better than existing approaches in semi-supervised HDA. Some concerns were raised by reviewers: missing discussion and citation of Masked OT and other approaches such as Fused GW but those were mostly addressed in the replies. The question of the choice of d was also well answered with new experiments. The consensus between reviewers was that the replies were great and that the paper should be accepted an NeurIPS. Nevertheless it was very clear from the discussion that the new results, discussion and positioning wrt the state of the art MUST be included in the final version and its supplementary. The paper is also lacking a discussion about how to obtain keypoints pairs in practice (other than using existing labels) which is very important to ensure that the method can be used in practice.
train
[ "Ll8y6xJnRu6", "-DpwAcTxmBL", "zhldE7vuPVYi", "mYEOe1R7G8N", "iHpMbSpLpd", "rZZVsw-bXt3", "OKXG7aBUr_6_", "Gl_CwY4t6F6", "JCV8ulaa1P", "l5A_8GUNCfA", "D2D35e030vQ", "-EbvV_YxfTy", "96Z5_0HBj_U", "u8KLagAnNFa", "ZYOi3Hts8f", "4bDQY_hKtV1", "ah8H7dJXHF4", "1eU1h41eyI", "WtTO81gFyZ", "57aGvEl2wyP", "vvrcEPf0Bs", "eOO2QON-zu6", "li4O7btuyS" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the comments again. We will include the discussions on the related work, the experimental details, and the additional experiments in the final version if accepted. Regarding Q3: built upon the softmax, the relation scores in Eqs. (7) and (8) model the \"relation\" of each point to the keypoints. Owing to the softmax-based formulation, the relation scores in Eqs. (7) and (8) rely on the relative rather than the absolute magnitude of distances, improving their robustness to the scale of distances. This clarification will be included in the paper.\n\nOur code can reproduce the results in the paper. It contains the extracted features of the Office-31 dataset for the HDA experiments. We will clean and release the code with the extracted features on Google Drive and GitHub.", " Thanks for the comments again. The impact of the location of the source keypoints has been studied in the response to Q2. We next report the results for varying numbers of source keypoints in Table R2-4. \nTable R2-4. Results for varying numbers of source keypoints in the HDA experiment on Office-31.\n\n| Number | 3 | 5 | 7 | 9 | All |\n| -------- | ---- | ---- | ---- | ---- | ---- |\n| Accuracy | 78.4 | 79.2 | 79.6 | 79.8 | 79.9 |\n\nIn this experiment, we randomly sample 3/5/7/9 samples (keypoints) or use all the source samples (keypoints) for each class in the source domain to compute the source class centers, which are paired with labeled target samples for constructing the keypoint pairs. The results in Table R2-4 show that as the number of source keypoints increases, the accuracy gradually increases. The best result is obtained when all source samples are used to compute the class centers.\n\nAs suggested, the impact of the number and location of source keypoints, as well as a clearer comparison with other SOTA methods, will be included in Section 5.2 of the final paper. ", " I thank the authors for their response.\n\nAfter reading their rebuttal, I appreciate that the authors have addressed all of my questions. The authors are encouraged to include the discussion about related work, the experimental details, and additional experiments in the final version of the paper to improve the quality and clarity of the paper.\n\nMy only remaining concern is that the answer for Q3 is not completely satisfactory. The authors tried to expand what was written in L170-172 and Figure 3, which I understood. From my point of view, Eq. (7) and Eq. (8) strongly resemble the formula of softmax with temperature.\n\n*Additional note*: The source code can be zipped and included in the supplementary materials or uploaded anonymously to some websites such as [Anonymous GitHub](https://anonymous.4open.science/).\n\nAll in all, I would like to increase my score from 4 to 6.", " First, I would like to thank the authors for their detailed response and the additional figures that have been reported. \nThe impact of the choice of the keypoints (number + location) should be clarified in the final version of the paper, together with a better positioning with respect to the SOTA (see comments of the other reviewers).\nThat being said, I am happy to raise my score to 6.", " I thank the authors for their response. \n\nI appreciate the effort. \nAs said before, I am happy to increase the score to 7.", " Thanks for the comments again. We investigate the effect of $\\alpha$ in the following Table R4-5. \n\nTable R4-5. 
Results of KPG-RL-GW with varying values of $\\alpha$ in the HDA task A$\\rightarrow$W.\n\n| $\\alpha$ | 0.9 | 0.8 | 0.7 | 0.6 | 0.5 | 0.4 | 0.3 | 0.2 | 0.1 |\n| -------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n| Accuracy | 74.3 | 78.1 | 81.5 | 82.9 | 84.2 | 84.5 | 84.0 | 84.0 | 83.7 |\n\nFrom Table R4-5, it can be observed that the best value of $\\alpha$ is 0.4 in this task, and the results are relatively stable when $\\alpha$ ranges in [0.2, 0.5]. We will include this experiment in Appendix B.", " I thank the authors for their response. \n\nThe authors have addressed all of my questions well. As long as the authors include all the additional clarifications and details in the final version, I believe this paper will be a very good and solid work.\n\nI only have one minor concern: I believe the parameter $\\alpha$ should also be tuned, rather than being fixed. Nevertheless, the current performance of KPG is already very competitive, so the tuning might not help much.\n\nHowever, this minor concern does not prevent me from increasing the score to 7.", " We thank the reviewer for the comments and suggestions. We will revise our paper accordingly.\n\n**Q1: Results of KPG-RL without $G$ and the fused model.** \n\nThe results for $L_{kpg}(\\pi)=\\langle M, \\pi \\rangle_F$ and $L_{kpg}(\\pi)=\\langle M\\odot\\pi,G \\rangle_F$ are as follows.\n\nTable R4-1. Results for different definitions of $L_{kpg}(\\pi)$ in the HDA experiment.\n\n| Definition of $L_{kpg}(\\pi)$ | $\\langle M, \\pi \\rangle_F$ | $\\langle M\\odot\\pi,G \\rangle_F$ |\n| ----------------------------- | :--------------------------: | :-------------------------------: |\n| KPG-RL | 60.7 | 79.9 |\n| KPG-RL-GW | 60.6 | 79.6 |\n\nWe can see that without $G$, the results of both the KPG-RL model and the KPG-RL-GW model decrease. This may be because $L_{kpg}(\\pi)=\\langle M, \\pi \\rangle_F$ does not impose the guidance of keypoints well, since it does not model the \"relation\" of each point to the keypoints. We will include this experiment in Appendix B.\n\n**Q2: Comparison of different choices for $d$.**\n\nSince $R_k^s$ and $R_l^t$ are in the probability simplex, it is reasonable to measure their difference by a distribution divergence/distance. The widely used distribution divergences/distances include the KL-divergence, the JS-divergence, and the Wasserstein distance. The KL-divergence is not symmetric, so we need to determine the order of its inputs. For the Wasserstein distance, one should define the ground metric first. A possible strategy is to set the ground metric to 0 if the two keypoints are paired, and 1 otherwise. Such a ground metric makes the Wasserstein distance equal to the $L_1$-distance. In this work, $d$ is taken as the JS-divergence. We compare the performance of different choices of $d$ in the HDA experiment on Office-31, as in Table R4-2.\n\nTable R4-2. 
Results of different choices of $d$ in HDA experiment on Office-31.\n\n| Choices of $d$ | A$\\rightarrow$A | A$\\rightarrow$D | A$\\rightarrow$W | D$\\rightarrow$A | D$\\rightarrow$D | D$\\rightarrow$W | W$\\rightarrow$A | W$\\rightarrow$D | W$\\rightarrow$W | Avg |\n| -------------- | :---------------: | :---------------: |:---------------: | :---------------: | :---------------: |:---------------: | :---------------: |:---------------: |:---------------: | :---------------: |\n| KL-ST | 59.0 | 89.7 | **83.6** | 56.8 | 95.2 | **89.0** | 57.7 | 93.6 | 88.1 | 79.2 |\n| KL-TS | 58.1 | 89.0 | 82.3 | 54.2 | 93.9 | 88.1 | 54.2 | 93.2 | **89.4** | 78.0 |\n| $L_1$-distance | 57.4 | 85.8 | 79.0 | 58.0 | 85.8 | 82.9 | 58.4 | 92.6 | 83.6 | 75.9 |\n| $L_2$-distance | 52.3 | 85.8 | 81.3 | 53.2 | 91.3 | 82.3 | 52.6 | 90.3 | 82.9 | 74.7 |\n| GW | 42.0 | 71.6 | 70.0 | 41.6 | 71.0 | 69.4 | 42.3 | 71.3 | 70.0 | 61.0 |\n| JS | **60.0** | **91.6** | **83.6** | **57.4** | **95.8** | 87.7 | **59.1** | **95.2** | 88.4 | **79.9** |\n\nIn Table R4-2, KL-ST and KL-TS denote the KL-divergence $KL(R_k^s, R_l^t)$ and $KL(R_l^t, R_k^s)$ respectively. GW is the Gromov-Wasserstein distance between $ R_k^s$ and $R_l^t$ where the source/target cost is taken as the $L_2$-distance of source/target keypoints. We find that the JS-divergence achieves the best performance, compared with KL-ST, KL-TS, $L_1$-distance, $L_2$-distance, and Gromov-Wasserstein. We will include this experiment in Appendix B, and cite them in Section 4 in the revised paper.\n\n", " **Q3: Motivation to prefer KPG-RL to fused GW model of $\\min_{\\pi\\in\\Pi(p,q)}\\\\{\\alpha L_{gw}(\\pi) + (1-\\alpha) L_w(\\pi)\\\\}$ with $L_w(\\pi)=\\langle M, \\pi \\rangle_F$ or $L_w(\\pi)=\\langle M\\odot G, \\pi \\rangle_F$.**\n\nThe KPG-RL models the guidance of keypoints to the other points in OT by a mask-based constraint on the transport plan to enforce the matching of keypoints and preserving the relation of each point to keypoints. While the above fused GW models may not enforce the matching of keypoints. Table R4-3 implies that the results of the above defined fused GW models are lower than the result of KPG-RL model. \n\nTable R4-3. Results of KPG-RL and fused GW models for HDA on Office-31.\n\n| Methods | KPG-RL | Fused GW (w/ $L_w(\\pi)=\\langle M, \\pi \\rangle_F$) | Fused GW (w/ $L_w(\\pi)=\\langle M\\odot G, \\pi \\rangle_F$) |\n| -------- | :------: | :-------------------------------------------------: | :--------------------------------------------------------: |\n| Accuracy | 79.9 | 25.9 | 73.2 |\n\nFrom the computation point of view, the fused-GW models are non-convex quadratic programs, while KPG-RL is a linear program. Table R4-4 shows that the computation of the KPG-RL takes less time than that of fused GW models. Another experimental finding is that the definition of $L_{kpg}(\\pi)$ affects the convergence speed of the fused GW.\n\nTable R4-4. Time cost for computing KPG-RL and fused GW models in the task A$\\rightarrow$A of HDA experiment on Office-31.\n\n| Methods | KPG-RL | Fused GW (w/ $L_w(\\pi)=\\langle M, \\pi \\rangle_F$) | Fused GW (w/ $L_w(\\pi)=\\langle M\\odot G, \\pi \\rangle_F$) |\n| ------- | :------: | :-------------------------------------------------: | :--------------------------------------------------------: |\n| Time | 6.7s | 98.1s | 25.6s |\n\n**Q4: Range and set of $\\alpha$.**\n\nThanks for this question. The range of $\\alpha$ is $(0,1)$. We simply set $\\alpha$ to 0.5 throughout this paper. 
This will be added in the revised paper.\n\n**Q5: Clarifying that the target data are not transformed for the GW model.**\n\nWe use GW to learn the transport plan between source and target domain data. Then, we transport the source data using the barycentric mapping. Finally, we train the classifier on the transported source data with labels and labeled target data. We will include these details for GW in Section 5.2 in the revised paper. \n\n**Q6: Making Fig. 4 clearer and including the barycentric mapping in Appendix.**\n\nThanks for the suggestions. We will update Fig. 4. The barycentric mapping is defined as follows. Given the transport plan $\\pi\\in\\Sigma_{m\\times n}$ and source data point $x_{i_0}$, the barycentric mapping [48] is defined as $B_{\\pi}(x_{i_0})=\\arg\\min_y{\\sum_{j=1}^n\\pi_{i_0,j}c(x_{i_0},y_j)}$. Since $c$ is the squared $L_2$-distance in our paper, $B_{\\pi}(x_{i_0})$ has closed-form expression of $B_{\\pi}(x_{i_0})=\\frac{1}{\\sum_{j=1}^n\\pi_{i_0,j}}\\sum_{j=1}^n\\pi_{i_0,j}y_j$. As suggested, we will include the barycentric mapping in Appendix B.\n", " **Q7: Do KPG-RL, KPG-RL-KP, or KPG-RL-GW provide divergences/metrics?**\n\nThanks for the suggestion. Given the annotated correct matched keypoint pairs, we have proved that the KPG-RL-KP provides a proper metric and the KPG-RL-GW provides a divergence. \n\n* For $\\min_{\\pi\\in\\Pi(p,q,M)}{L_{kpg}(\\pi)}=0$, we only have the equality of source and target relation scores (defined in Eqs. (7) and (8)) rather than the equality of source and target points, because the source and target keypoints could be different. Therefore, $\\min_{\\pi\\in\\Pi(p,q,M)}{L_{kpg}(\\pi)}=0$ does not imply $p=q$.\n* When the points lie in the same ground space, the \"correct\" matched keypoint pairs implies that if $p=q$, the paired keypoints must be equal, i.e., for any $(i,j)\\in\\mathcal{K}$, we have $x_i=y_j$. In such a case, $p=q$ if and only if $\\min_{\\pi\\in\\Pi(p,q;M)}{\\langle M\\odot\\pi,\\alpha C + (1-\\alpha)G\\rangle}=0$, which means the KPG-RL-KP provides a divergence. The proof follows the proof idea of the Wasserstein distance (Proposition 2.2 in [r1]). \n* For the KPG-RL-GW model, the \"correct\" matched keypoint pairs implies that if there is an isometric bijection $\\sigma$ between two graphs (modeled as distributions $p$ and $q$), i.e., isomorphism, we have that $\\sigma$ maps the source keypoint to its paired target keypoint. In this case, the two graphs are isomorphic if and only if $\\min_{\\pi\\in\\Pi(p,q;M)}\\\\{\\alpha L_{gw}(M\\odot\\pi) + (1-\\alpha)L_{kpg}(\\pi)\\\\}=0$. The proof follows the proof idea of Theorem 3.2 in the paper of fused GW [35].\n* When the points lie in the same ground space, the KPG-RL-KP model provides a proper metric, if both $c$ and $d$ are distances. The symmetry is easy to verify, because both $C$ and $G$ are symmetric. The triangle inequality follows the proof idea of the Wasserstein distance (Proposition 2.2 in [r1]).\n\nWe will include a Proposition for describing these properties and detailed proofs in Appendix A.\n\n[r1] Peyré G, Cuturi M. Computational optimal transport: With applications to data science[J]. Foundations and Trends® in Machine Learning, 2019, 11(5-6): 355-607.", " \nWe thank the reviewer for the comments. We will revise our paper accordingly.\n\n**Q1: The contribution and difference of the proposed KPG-RL compared with Masked OT [r1].**\n\nThanks for recommending this related work. 
Though the paper [r1] of the Masked OT is public as an arXiv paper on March 20, 2022, it is accepted by IJCAI-2022, which is held in July 23-29, 2022, after the NeurIPS submission deadline of May 19, 2022. We did not know this work when working on and submitting this paper. The paper [r1] studies the fine-tuning of the graph neural network (GNN). The authors of [r1] propose the Masked OT model as a regularization term to preserve the local feature invariances between fine-tuned and pretrained GNNs. Our paper investigates the guidance of correct matching in OT using a few annotated keypoint pairs. To impose the guidance of keypoints, we use a mask-based constraint on the transport plan to enforce the matching of keypoint pair in OT. We then propose to preserve the relation (defined in Eqs. (7) and (8)) of each point to the keypoints by our KPG-RL model in Eq. (9). Compared with Masked OT [r1], the research problem of our work is different. In methodology, our main contribution is the relation preservation for imposing the guidance of keypoints, which is different from the work of [r1]. For the mask-based modeling, it is utilized to impose the matching of keypoints, which is theoretically guaranteed by Proposition 1. While the mask in [r1] aims to preserve the local information of finetuned network from pretrained models. The motivation and the design of the mask in our approach are different from those in Masked OT [r1]. We will cite [r1] in the related work and the mask-based modeling part of the revised paper. \n\n**Q2: Experimental comparison with Masked OT (KP) and Masked GW.**\n\nWe first compare the Masked GW using the mask designed in Proposition 1, in HDA experiments on Office-31, shown in Table R3-1. \n\nTable R3-1. Results comparison with Masked GW \n\n| | A$\\rightarrow$A | A$\\rightarrow$D | A$\\rightarrow$W | D$\\rightarrow$A | D$\\rightarrow$D | D$\\rightarrow$W | W$\\rightarrow$A | W$\\rightarrow$D | W$\\rightarrow$W | Avg |\n| ------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | -------- |\n| Masked-GW | 41.3 | 71.6 | 69.7 | 41.9 | 71.2 | 69.8 | 40.3 | 71.6 | 69.7 | 60.8 |\n| KPG (w/ dist) | 55.2 | 60.7 | 71.6 | 51.3 | 71.9 | 77.1 | 48.7 | 70.0 | 77.7 | 64.9 |\n| KPG-RL | **60.0** | **91.6** | **83.6** | **57.4** | **95.8** | **87.7** | **59.1** | **95.2** | **88.4** | **79.9** |\n\nFrom Table R3-1, we can see that KPG-RL outperforms Masked-GW by a margin of 19.2%. Compared with Masked-GW, KPG(w/ dist) only retains the difference between distances from each point to keypoints in the summation of Eq. (3), while Masked-GW contains the difference between distances from each points to all points in the summation of Eq. (3). It is interesting that KPG (w/ dist) performs better than Masked-GW. ", " \n**Q3: Explanation of the definition of relation score.**\n\nThe relation scores $R_{k,i_u}^s$ and $R_{l,j_u}^t$ are defined in Eqs. (7) and (8), and illustrated in Fig. 3. Then, based on the relation score, the relation vectors ($R_k^s$ and $R_l^t$) are defined in line 169 of the paper. We next take the example illustrated in Fig. 3 to explain the definition of the relation score. In Fig. 3, the source point with index $k$ is near to the source keypoint with index $i_2$. The target point with index $l$ is near to the target keypoint with index $j_2$, and $(i_2, j_2)$ are indexes of paired keypoints. According to Eq. 
(7), $R_{k,i_2}^s$ is close to 1 since $C_{k,i_2}^s$ is much smaller than $C_{k,i_1}^s$ and $C_{k,i_3}^s$, while $R_{k,i_1}^s$ and $R_{k,i_3}^s$ are close to 0. As a consequence, $R_k^s$ is close to $(0,1,0)$. Similarly, $R_l^t$ is also close to $(0,1,0)$. Therefore, $G_{k,l}=d(R_k^s,R_l^t)$ could be small. By the relation preservation model, i.e., the KPG-RL model in Eq. (9), the optimal transport plan $M\\odot \\pi$ has larger entries in the locations where the entries of $G$ are smaller. Hence the cross-domain points corresponding to these locations (e.g. $k$ and $l$ in Fig. 2) that are near to the paired keypoints tend to be matched. Based on the softmax-based formulations in Eqs. (7) and (8), $d(R_k^s,R_l^t)$ is mainly determined by the relation score to the closest keypoint(s), since the relation scores to distant keypoints are small or close to 0. This implies that the points are mainly guided by the closest keypoints in our KPG-RL model in Eq. (9). This explanation will be included in the paragraph under \"Modeling the relation to keypoints\" in the revised paper.\n\n**Q4: Ablation study and explanation for the choice of $d$.**\n\nSince $R_k^s$ and $R_l^t$ are in the probability simplex, it is reasonable to measure their difference by a distribution divergence/distance. The widely used distribution divergences/distances include the KL-divergence, the JS-divergence, and the Wasserstein distance. The KL-divergence is not symmetric, so we need to determine the order of its inputs. For the Wasserstein distance, one should define the ground metric first. A possible strategy is to set the ground metric to 0 if the two keypoints are paired, and 1 otherwise. Such a ground metric makes the Wasserstein distance equal to the $L_1$-distance. In this work, $d$ is taken as the JS-divergence. We compare the performance of different choices of $d$ in the HDA experiment on Office-31, as in Table R3-3.\n\nTable R3-3. Results of different choices of $d$ in the HDA experiment on Office-31.\n\n| Choices of $d$ | A$\\rightarrow$A | A$\\rightarrow$D | A$\\rightarrow$W | D$\\rightarrow$A | D$\\rightarrow$D | D$\\rightarrow$W | W$\\rightarrow$A | W$\\rightarrow$D | W$\\rightarrow$W | Avg |\n| -------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | -------- |\n| KL-ST | 59.0 | 89.7 | **83.6** | 56.8 | 95.2 | **89.0** | 57.7 | 93.6 | 88.1 | 79.2 |\n| KL-TS | 58.1 | 89.0 | 82.3 | 54.2 | 93.9 | 88.1 | 54.2 | 93.2 | **89.4** | 78.0 |\n| $L_1$-distance | 57.4 | 85.8 | 79.0 | 58.0 | 85.8 | 82.9 | 58.4 | 92.6 | 83.6 | 75.9 |\n| $L_2$-distance | 52.3 | 85.8 | 81.3 | 53.2 | 91.3 | 82.3 | 52.6 | 90.3 | 82.9 | 74.7 |\n| GW | 42.0 | 71.6 | 70.0 | 41.6 | 71.0 | 69.4 | 42.3 | 71.3 | 70.0 | 61.0 |\n| JS | **60.0** | **91.6** | **83.6** | **57.4** | **95.8** | 87.7 | **59.1** | **95.2** | 88.4 | **79.9** |\n\nIn Table R3-3, KL-ST and KL-TS denote the KL-divergences $KL(R_k^s, R_l^t)$ and $KL(R_l^t, R_k^s)$, respectively. GW is the Gromov-Wasserstein distance between $R_k^s$ and $R_l^t$, where the source/target cost is taken as the $L_2$-distance between source/target keypoints. We find that the JS-divergence achieves the best performance, compared with KL-ST, KL-TS, the $L_1$-distance, the $L_2$-distance, and Gromov-Wasserstein. 
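\n\nFor concreteness, a minimal NumPy sketch of how the choice of $d$ enters the guiding matrix $G$ is given below. It is only an illustration: it assumes the relation scores are softmax weights over negative squared distances to the keypoints (one reading of Eqs. (7) and (8), not necessarily their exact form), takes the keypoints as paired by their ordering, and uses hypothetical variable names.\n\n```python\nimport numpy as np\n\ndef relation_scores(X, keypoints):\n    # Softmax over negative squared L2 distances to the keypoints,\n    # so each row lies in the probability simplex.\n    D = ((X[:, None, :] - keypoints[None, :, :]) ** 2).sum(-1)   # (n, K)\n    E = np.exp(-(D - D.min(axis=1, keepdims=True)))              # stabilized softmax\n    return E / E.sum(axis=1, keepdims=True)\n\ndef js_divergence(P, Q, eps=1e-12):\n    # Pairwise Jensen-Shannon divergence between rows of P and rows of Q.\n    P = P[:, None, :] + eps\n    Q = Q[None, :, :] + eps\n    Mix = 0.5 * (P + Q)\n    kl = lambda A, B: (A * np.log(A / B)).sum(-1)\n    return 0.5 * kl(P, Mix) + 0.5 * kl(Q, Mix)\n\n# Source/target data in different spaces, with 3 keypoint pairs (same ordering).\nXs, Xt = np.random.randn(30, 5), np.random.randn(40, 8)\nKs, Kt = Xs[:3], Xt[:3]\nRs, Rt = relation_scores(Xs, Ks), relation_scores(Xt, Kt)\nG = js_divergence(Rs, Rt)   # guiding matrix of shape (30, 40)\n```\n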
Due to the space limit, we will include this experiment in Appendix B, and cite them in Section 4 in the revised paper.", " \n**Q5: On reproducibility and experimental settings/details**\n\n- **Details of kernel SVM.**\n\n In the kernel SVM, we use the radial basis function kernel $k(x,y)=\\exp(-\\gamma \\|x-y\\|^2)$, where $\\gamma$ is set to the reciprocal of the feature dimension. We use the scikit-learn packadge of python to implement it by simply running the following codes:\n\n clf = SVC(gamma='auto')\n clf.fit(feat_train,label_train)\n\n* **Value of $\\alpha$**.\n\n In this paper, $\\alpha$ is simply set to 0.5.\n\n* **The other hyper-parameters in Section 5.2.**\n\n Apart from $\\alpha$, another hyper-parameter is $\\epsilon$ , which is set to 0.005.\n\n* **On reproducibility**.\n\n We will release the source codes for the experiments in this paper on GitHub.\n\nWe will include these experimental details in Appendix B.\n\n**Q6: Sensitivity to $\\epsilon$.**\n\nThe results for varying $\\epsilon$ are in the following Table R3-4.\n\nTable R3-4. Results for varying $\\epsilon$ .\n\n| $\\epsilon$ | 0.0001 | 0.0005 | 0.001 | 0.005 | 0.01 | 0.05 | 0.1 | 1 |\n| ---------- | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | \n| Accuracy | 83.2 | 83.2 | 83.3 | 83.6 | 82.3 | 76.5 | 74.8 | 71.0 |\n\n**Q7: All labeled source data are transported in Open-set HDA.**\n\nSorry for making this misunderstanding. For Open-set HDA, all the labeled source data are transported to target domain. We will clarify this in Section 5.3 in the revised paper.\n\n**Q8: Correcting the statement on the sensitivity to hyper-parameters.**\n\nThanks. We will remove the statements that \"our method is stable to hyper-parameters.\"\n\n**Q9: The typos.**\n\nThanks for the suggestion. We will update Fig. 2 and the Checklist.", " We thank the reviewer for the comments and suggestions. We will revise our paper accordingly.\n\n**Q1: Comparison with hierarchical OT and TLB, and clarify why use the proposed approach than hierarchical OT and TLB.**\n\nThe hierarchical OT (HOT) [36] splits the data points into some subgroups/clusters and then derives the matching of these subgroups by OT taking the Wasserstein distances between subgroup pairs as the ground metric. Our approach aims to use the annotated keypoint pairs to guide the matching of other points in OT. We use a mask-based constraint on the transport plan to enforce the matching of keypoints and impose the guidance of keypoints by preserving the relation to the keypoints. Compared with HOT, the goal of our approach is different. In methodology, we do not explicitly divide the points into subgroups, and there is no hierarchy in our model (see Eq. (9)). The relation score defined in Eqs. (7) and (8) could be treated as the \"soft assignment\" of each point to keypoints. \n\nTLB [27] is a lower bound of the Gromov-Wasserstein that can be computed faster. TLB takes the ordered distance of each point to all the points in the same domain as features, and then performs the Kantorovich formulation of OT using such features. Differently, our method uses a carefully designed relation (see Eqs. (7) and (8)) of each point to the keypoints to impose the guidance of keypoints to the other points. As shown in experiments, our KPG-RL model outperforms KPG (w/ dist) by a large margin (by 15%) in Table 1, indicating that preserving relation can impose the guidance better than preserving distance. 
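\n\nTo make the mask-plus-relation construction above concrete, a generic masked entropic-OT sketch is shown below: forbidden entries of the plan are removed by zeroing the corresponding kernel entries, and Sinkhorn scaling is run on the guiding matrix. This is a simplified stand-in (uniform marginals, plain Sinkhorn, illustrative names), not the exact algorithm in the paper; a log-domain implementation is preferable for very small eps.\n\n```python\nimport numpy as np\n\ndef masked_sinkhorn(G, mask, eps=0.05, n_iter=1000):\n    # Entropic OT restricted to the support allowed by the binary mask:\n    # entries with mask == 0 get a zero kernel value, so the plan vanishes there.\n    m, n = G.shape\n    p = np.full(m, 1.0 / m)\n    q = np.full(n, 1.0 / n)\n    K = mask * np.exp(-G / eps)\n    u, v = np.ones(m), np.ones(n)\n    for _ in range(n_iter):\n        u = p / (K @ v + 1e-16)\n        v = q / (K.T @ u + 1e-16)\n    return u[:, None] * K * v[None, :]   # the masked transport plan\n\n# Toy usage with a random guiding matrix and a mask that allows every match.\nG = np.random.rand(30, 40)\nmask = np.ones((30, 40))\nplan = masked_sinkhorn(G, mask)\n```\n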
\n\nWe next compare our KPG-RL with HOT and TLB in the HDA experiment on Office-31, as in Tabel R2-1.\n\nTabel R2-1. Results for hierarchical OT, TLB, and KPG-RL in HDA experiment on Office-31.\n\n| | A$\\rightarrow$A | A$\\rightarrow$D | A$\\rightarrow$W | D$\\rightarrow$A | D$\\rightarrow$D | D$\\rightarrow$W | W$\\rightarrow$A | W$\\rightarrow$D | W$\\rightarrow$W | Avg |\n| ---------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | -------- |\n| HOT | 39.0 | 44.8 | 40.0 | 31.3 | 52.6 | 44.8 | 29.7 | 60.0 | 56.5 | 44.3 |\n| TLB | 29.4 | 36.5 | 43.2 | 24.5 | 31.3 | 51.0 | 23.6 | 31.9 | 49.7 | 35.7 |\n| Masked-HOT | 45.2 | 60.3 | 57.4 | 48.9 | 63.5 | 59.2 | 40.3 | 67.1 | 61.4 | 55.9 |\n| Masked-TLB | 42.5 | 66.3 | 64.7 | 38.5 | 68.5 | 65.9 | 43.1 | 68.2 | 67.3 | 58.3 |\n| **KPG-RL** | **60.0** | **91.6** | **83.6** | **57.4** | **95.8** | **87.7** | **59.1** | **95.2** | **88.4** | **79.9** |\n\nNote that when implementing HOT, we first cluster the target data into 31 clusters using k-means where the centers are initialized by the class centers estimated by labeled target data. Then we perform KP on the clusters, in which the ground metric is the Gromov-Wasserstein distance between each source cluster and each target cluster, since the source and target clusters are in different spaces. In Table R2-1, Masked-HOT and Masked-TLB are respectively the variants of HOT and TLB that use our mask-based modeling of transport plan to enforce the matching of labeled data in HOT and TLB. From Table R2-1, we can observe that, with the mask-based constraint, the performances of HOT and TLB are improved but still largely lower than the performance of our KPG-RL by more than 20%. This confirms the importance of the relation for imposing the guidance of keypoints. The methodology comparison with HOT and TLB will be included in the related work, and the experimental comparison will be included in Section 5.2 in the revised paper. \n\nThe above comparison indicates that our method is more suitable than HOT or TLB for the applications that a few paired keypoints could be defined. For these applications, our approach can better use the keypoints to guide the correct matching of the other points.\n\n**Q2: Sensitivity to source keypoints.**\n\nIn the experiments of the paper, the source keypoints are taken as the source class centers. To study the sensitivity to the source keypoints, we randomly sample one data point from each class as a keypoint to construct the source keypoints. We run the experiments with five different samplings for constructing the source keypoints (these five runs are denoted as S1, S2, S3, S4, S5 respectively). The results are reported in the following Table R2-2. ", " Table R2-2. Results for different source keypoints.\n\n| S1 | S2 | S3 | S4 | S5 | Centers |\n| :----:| :----:| :----:| :----:| :----: | :--------: |\n| 76.8 | 77.5 | 78.2 | 77.8 | 76.9 | **79.9** |\n\nFrom Table R2-2, we can see that using the class center as the keypoints achieves the best results, compared with randomly sampling one data point per class as the keypoints. This may be because the class centers are estimated using all the data of each class, and these centers can better represent each class than a randomly sampled data point of each class. 
This experiment will be included in Appendix B and cited in Section 5.2 in the revised paper.\n\n**Q3: Extending the proposed approach to deep learning using mini-batch-based implementation.**\n\nTo extend our method to the mini-batch-based implementation, the main challenge is that some of the samples in the mini-batch may not be matched. For instance, in domain adaptation, the categories of some samples in the source mini-batch may not be present in the target mini-batch, and thus these source samples should not be transported/matched. Inspired by [r1] that uses partial OT over the mini-batch data to implement deepJDOT [r2], we use our partial KPG-RL-KP model to partially match the mini-batch data in the training of the deep network. The partial KPG-RL-KP model is modified from Eq. (13) by replacing $G$ by $\\alpha C + (1-\\alpha) G$. As an experimental example, we apply the partial KPG-RL-KP to the unsupervised domain adaptation experiment on the Office-Home dataset. We take the source and target class centers of the same class as a keypoint pair. The centers are online updated by exponential moving average in training, same as in [r2]. We use the pseudo labels of target data to update the target class centers, due to the lack of target labels. The protocol is the same as that in [r1]. The batch size is set to 65 and the total transport mass ($s$ in Eq. (13)) is set to 0.6, which are the same as those in [r1]. The results are reported in the following Table R2-3. \n\nTable R2-3. Results for unsupervised domain adaptation.\n\n| Method | A$\\rightarrow$C | A$\\rightarrow$P | A$\\rightarrow$R | C$\\rightarrow$A | C$\\rightarrow$P | C$\\rightarrow$R | P$\\rightarrow$A | P$\\rightarrow$C | P$\\rightarrow$R | R$\\rightarrow$A | R$\\rightarrow$C | R$\\rightarrow$P | Avg |\n| ---------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------- |\n| ROT [r3] | 47.20 | 71.80 | 76.40 | 58.60 | 68.10 | 70.20 | 56.50 | 45.00 | 75.80 | 69.40 | 52.10 | 80.60 | 64.30 |\n| m-OT [r4] | 51.75 | 70.01 | 75.79 | 59.60 | 66.46 | 70.07 | 57.60 | 47.88 | 75.29 | 66.82 | 55.71 | 78.11 | 64.59 |\n| m-UOT [r5] | 54.99 | **74.45** | **80.78** | 65.66 | 74.93 | 74.91 | 64.70 | 53.42 | 80.01 | 74.58 | 59.88 | 83.73 | 70.17 |\n| m-POT [r1] | 55.65 | 73.80 | 80.76 | **66.34** | 74.88 | **76.16** | 64.46 | 53.38 | **80.60** | 74.55 | 59.71 | 83.81 | 70.34 |\n| **m-KPG-RL-KP** | 52.13 | 63.65 | 74.53 | 61.12 | 67.84 | 67.88 | 59.84 | 52.93 | 76.90 | 71.92 | 59.21 | 82.55 | 65.88 |\n| **m-PKPG-RL-KP** | **57.96** | **74.45** | 78.75 | 66.30 | **75.22** | 74.39 | **66.87** | **58.47** | 80.47 | **75.15** | **61.15** | **84.23** | **71.12** |\n\nIn Table R2-3, ROT [r3] is a robust OT method. m-OT is the direct mini-batch implementation of deepJDOT [r4]. m-UOT [r5] and m-POT [r1] are respectively unbalanced deepJDOT and partial deepJDOT on mini-batch data. m-KPG-RL-KP is the direct mini-batch implementation of our KPG-RL-KP model. m-PKPG-RL-KP is the mini-batch implementation of our partial KPG-RL-KP model. We can see that by partially matching the samples in the mini-batches, m-KPG-RL-KP outperforms m-KPG-RL-KP by a margin of 6.24%. Our partial KPG-RL-KP (m-PKPG-RL-KP) outperforms partial DeepJDOT (m-POT) by 0.68%, indicating that using partial matching, our approach is effective for unsupervised domain adaptation under mini-batch implementation. 
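\n\nA rough sketch of one such mini-batch step is given below. It assumes POT provides `ot.partial.partial_wasserstein` for the partial solver, uses illustrative variable names, and omits the keypoint mask and the network update for brevity, so it should be read as a simplified illustration rather than our exact implementation.\n\n```python\nimport numpy as np\nimport ot  # POT: Python Optimal Transport\n\ndef minibatch_partial_plan(feat_s, feat_t, G, alpha=0.5, total_mass=0.6):\n    # Fused cost on the mini-batch: squared-L2 cost plus the keypoint-guided term.\n    C = ot.dist(feat_s, feat_t)                  # pairwise squared Euclidean cost\n    cost = alpha * C + (1.0 - alpha) * G\n    a = np.full(len(feat_s), 1.0 / len(feat_s))\n    b = np.full(len(feat_t), 1.0 / len(feat_t))\n    # Only a fraction of the batch mass is transported, so samples whose class\n    # is missing on the other side of the batch can remain unmatched.\n    return ot.partial.partial_wasserstein(a, b, cost, m=total_mass)\n\n# Toy usage: batch features (e.g. detached network outputs) and a guiding matrix.\nfeat_s, feat_t = np.random.randn(65, 256), np.random.randn(65, 256)\nG_batch = np.random.rand(65, 65)   # stands in for the relation-based guiding term\nplan = minibatch_partial_plan(feat_s, feat_t, G_batch)\n```\n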
This experiment will be included in Appendix B and cited in Section 5 in the revised paper.", " **Q4: Could the proposed KPG-RL approach be extended to the case where the number of keypoints in the source and target distributions are different?**\n\nWe take the case that the target keypoint number is larger for illustration. In this case, since the keypoints are paired, there may be several target keypoints matched to the same source keypoint. To extend our approach to this situation, we can replace these target keypoints using their centers. As a result, the keypoints of source and target domains are one-to-one paired. In the experiments in Table 2 of the paper, 2/3 labeled target samples (target keypoints) for one class are given, which should be matched to one source class center (source keypoint). In the implementation, we use the center of these 2/3 labeled target samples as target keypoint, which is matched to the source class center of the same class as the target samples. The results in Table 2 of the paper show that our approach is effective for HDA.\n\n**Q5: Do the competitors use some labeled target samples?**\n\nAll the compared HDA methods use the same given labeled target samples. Moreover, all the methods are implemented in five runs. In each run, all the methods use the same training data, including the same labeled source domain data, labeled target domain data, and unlabeled target domain data. We will clarify this in Section 5.2 in the revised paper. \n\n**Q6: The typos.**\n\nThanks for these suggestions. We will correct them in the revised paper.\n\n\n\n[r1] Nguyen K, Nguyen D, Pham T, et al. Improving mini-batch optimal transport via partial transportation, ICML, 2022.\n\n[r2] Xie S, Zheng Z, Chen L, et al. Learning semantic representations for unsupervised domain adaptation, ICML, 2018.\n\n[r3] Balaji Y, Chellappa R, Feizi S. Robust optimal transport with applications in generative modeling and domain adaptation, NeurIPS, 2020.\n\n[r4] Damodaran B B, Kellenberger B, Flamary R, et al. Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation, ECCV, 2018.\n\n[r5] Fatras K, Séjourné T, Flamary R, et al. Unbalanced minibatch optimal transport; applications to domain adaptation, ICML, 2021.", " We thank the reviewer for the comments. We will revise our paper accordingly.\n\n**Q1: Memory and time cost of KPG-RL, and the peak memory consumption in the experiments with Office-31.**\n\nWe report the memory and time cost of KPG-RL with different sizes of the guiding matrix $G$ in the bottom row of Tables R1-1 and R1-2 respectively. For comparisons, we also report the memory and time cost of the Kantorovich Problem (KP). KP needs to calculate the pair-wised cost matrix $C$, as in Eq. (1). KPG-RL calculates the relation score defined in Eqs. (7) and (8), and then computes the guiding matrix $G$, as in line 174. Since we have deduced Sinkhorn's algorithm for solving KPG-RL, we solve both KP and KPG-RL using Sinkhorn's algorithm with $\\epsilon=0.005$. Table R1-1 shows that KPG-RL costs a slightly larger memory than KP. Table R1-2 shows the computational time for solving KPG-RL and KP problems. \n\nTable R1-1. __Peak memory__ for computing KP and KPG-RL.\n\n| Size ($m\\times n$) of $C$/$G$ | $500\\times 500$ | $1000\\times 1000$ | $2000\\times 2000$ |\n| ----------------------------- | :-------------: | :---------------: | :---------------: |\n| **KP** | 201M | 218M | 330M |\n| **KPG-RL** | 207M | 232M | 378M |\n\nTable R1-2. 
__Time cost__ for computing KP and KPG-RL.\n\n| Size ($m\\times n$) of $C$/$G$ | $500\\times 500$ | $1000\\times 1000$ | $2000\\times 2000$ |\n| ----------------------------- | :-------------: | :---------------: | :---------------: |\n| **KP** | 6.9s | 27.7s | 60.1s |\n| **KPG-RL** | 10.8s | 42.1s | 76.5s |\n\nIn the experiment on Office-31, the maximum memory of $G$ is 38M, and the peak memory of the running process is 780M. The discussion on memory and time cost will be included in Appendix B and cited in Section 5.2 of the revised paper.\n\n**Q2: Difference of ideas of the proposed keypoint-guided OT to the semi-supervised OT proposed in [9].**\n\nThe main difference of our keypoint-guided OT to the semi-supervised OT proposed in [9] is on the formulation of guidance of keypoints to the other points. Our keypoint-guided OT aims to use a few annotated keypoint pairs to guide the correct matching of the other points in OT. To realize this goal, we first use a mask-based constraint on the transport plan to enforce the matching of keypoints pairs, as illustrated in Fig. 2. We then preserve the relation (defined in Eqs. (7) and (8)) of each point to the keypoints to impose the guidance of keypoints. The semi-supervised OT [9] constrains the cost function to encourage the matching of labeled data across source and target domains that share the same class labels. [9] uses the Laplacian regularization to preserve the data structure, different from our keypoint-guided OT that explicitly models the guidance of keypoints matching to the other data points in our OT formulation, optimized by Sinkhorn's algorithm. We experimentally compare the matching accuracy using the toy data in Section 5.1 of the paper. The matching accuracy is computed as in lines 191-194 in the Appendix.\n\nTable R1-3. Matching accuracies of semi-supervised OT [9] and KPG-RL-KP on toy data.\n\n| Number of keyponit pairs | 0 | 2 | 3 | 20 | 30 |\n| ------------------------ | -------- | -------- | -------- | ------- | ------- |\n| Semi-supervised OT [9] | **41.7** | 41.7 | 41.7 | 58.3 | 66.7 |\n| KPG-RL-KP | **41.7** | **81.7** | **96.7** | **100** | **100** |\n\nFrom Table R1-3, it can be observed that given a few (2 or 3) labeled keypoint pairs, our proposed KPG-RL-KP can apparently improve the matching accuracy (by 40% or 55%). While the semi-supervised OT [9] can not improve the matching performance with 2 or 3 labeled keypoint pairs. When the number of labeled keypoint pairs increases (to 20 or 30), the matching performance of semi-supervised OT [9] is improved, but worse than our approach (by more than 30%).\nWe will include the methodology comparison in the related work of the revised paper, and the experimental comparison in Appendix B.", " **Q3: About encoding the relation in the mask.**\n\nIn this paper, we use a mask-based constraint on the transport plan to enforce the matching of keypoints (which is proved in Proposition 1). As illustrated in Fig. 2, we model the transport plan $\\tilde{\\pi}$ as the Hadamard product of a mask matrix $M$ and a matrix $\\pi$, defined as in Eq. (4). After that, we define the relation score of each point to the keypoints (as in Eqs. (7) and (8)), which is preserved by our KPG-RL model to impose the guidance to the other points. The relation score for each point in Eqs. (7) and (8) is defined based on the distances of the point to all the keypoints, modeling the \"relation\" of points to the keypoints. 
We have tried, however, it is non-trivial to model the \"relation\" to the keypoints by purely designing the mask matrix $M$, and this idea will be left in our future work. \n\n**Q4: Clarifying line 138.**\n\nThanks for this question. It is true that the Kantorovich problem does not require the one-to-one match. The requirements in line 138 are for the given paired keypoints. However, based on our mask-based OT formulation, the remaining points excluding keypoints are not required to be one-to-one match. Specifically, as we stated in line 133, $\\mathcal{K}$ is the set of indexes of the keypoint pairs. For any $(i,j)\\in\\mathcal{K}$, $i$ and $j$ are respectively the indexes of keypoints $x_i$ and $y_j$ that are paired and should be matched in OT. Therefore, $x_i$ and $y_j$ satisfy that all mass of point $x_i$ should be transported to $y_j$ and $y_j$ can only receive the mass from $x_i$, which is realized by the mask-based constraint on the transport plan. We will make this clearer in the revised paper.\n\n**Q5: On the choice of $\\tau$.**\n\nWe set $\\tau$ to $\\rho*\\max_{i,k}C^s_{i,k}$. Therefore, in Eq. (7), the distances are divided by their maximum value, and normalized to $[0,1]$, which may increase the robustness of the relation score to the scale of the distances. $\\rho$ is a tunable parameter that controls the \"sharpness\" of the relation vector $R_k^s$ defined in line 169. If $\\rho$ is smaller, all the relation vectors may be closer to \"one-hot\" vectors. If $\\rho$ is larger, $R_k^s$ may be closer to uniform probability vectors. In this paper, we empirically set $\\rho$ to 0.1, because 0.1 is a commonly used temperature in the softmax function, e.g., in [r2] and [r3] where the absolute value of the input of softmax is smaller than 1. Please refer to Table A-3 in Appendix B.2 for the effect $\\rho$. We will include this explanation in the paragraph under \"Modeling the relation to keypoints\" in the revised paper.\n\n**Q6: Clarifying the definition of $R_{k,i_u}^s$ and $G_{k,l}=d(R_k^s,R_l^t)$, and the reasons for using these discrepancies to construct the relation preservation model.**\n\nThe relation scores $R_{k,i_u}^s$ and $R_{l,j_u}^t$ are defined in Eqs. (7) and (8), and illustrated in Fig. 3. Based on the relation score, the relation vectors ($R_k^s$ and $R_l^t$) are defined in line 169 of the paper. We next take the example illustrated in Fig. 3 to explain the definition of the relation score. In Fig. 3, the source point with index $k$ is near to the source keypoint with index $i_2$. The target point with index $l$ is near to the target keypoint with index $j_2$, and $(i_2, j_2)$ are indexes of paired keypoints. According to Eq. (7), $R_{k,i_2}^s$ is close to 1 since $C_{k,i_2}^s$ is much smaller than $C_{k,i_1}^s$ and $C_{k,i_3}^s$. While $R_{k,i_1}^s$ and $R_{k,i_3}^s$ are close to 0. As a consequence, $R_k^s$ is close to $(0,1,0)$. And similarly, $R_l^t$ is close to $(0,1,0)$. This implies that $R_k^s$ and $R_l^t$ could be similar, and then $G_{k,l}=d(R_k^s,R_l^t)$ could be small. By the relation preservation model, i.e., KPG-RL model in Eq. (9), the optimal transport plan $M\\odot \\pi$ has larger entries in the locations where the entries of $G$ are smaller. Hence the cross-domain points corresponding to these locations (e.g. $k$ and $l$ in Fig. 2) that are near to the paired keypoints tend to be matched. Based on the softmax-based formulations in Eqs. 
(7) and (8), $d(R_k^s,R_l^t)$ is mainly determined by the relation score to the closest keypoint(s), since relation scores to the distant keypoints are small or close to 0. This implies that the points are mainly guided by the closest keypoints in our KPG-RL model in Eq. (9). We will include this explanation in the paragraphs under \"Modeling the relation to keypoints\" and \"Keypoint-guided model\" in the revised paper. (The rest response is in part (3/3))", " Regarding to $d$, since $R_k^s$ and $R_l^t$ are in the probability simplex, it is reasonable to measure their difference by a distribution divergence/distance. The widely used distribution divergences/distances include the KL-divergence, JS-divergence, and Wasserstein distance. The KL-divergence is not symmetric, so we need to determine the order of inputs. For the Wasserstein distance, one should define the ground metric first. A possible strategy is to set the ground metric to 0 if the two keypoints are paired, otherwise 1. Such a ground metric makes the Wasserstein distance equal to the $L_1$-distance. In this work, $d$ is taken as the JS-divergence. We compare the performance of different choices of $d$ in the experiment of HDA on Office-31, as in Table R1-4.\n\nTable R1-4. Results of different choices of $d$ in HDA experiment on Office-31.\n\n| Choices of $d$ | A$\\rightarrow$A | A$\\rightarrow$D | A$\\rightarrow$W | D$\\rightarrow$A | D$\\rightarrow$D | D$\\rightarrow$W | W$\\rightarrow$A | W$\\rightarrow$D | W$\\rightarrow$W | Avg |\n| -------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | -------- |\n| KL-ST | 59.0 | 89.7 | **83.6** | 56.8 | 95.2 | **89.0** | 57.7 | 93.6 | 88.1 | 79.2 |\n| KL-TS | 58.1 | 89.0 | 82.3 | 54.2 | 93.9 | 88.1 | 54.2 | 93.2 | **89.4** | 78.0 |\n| $L_1$-distance | 57.4 | 85.8 | 79.0 | 58.0 | 85.8 | 82.9 | 58.4 | 92.6 | 83.6 | 75.9 |\n| $L_2$-distance | 52.3 | 85.8 | 81.3 | 53.2 | 91.3 | 82.3 | 52.6 | 90.3 | 82.9 | 74.7 |\n| GW | 42.0 | 71.6 | 70.0 | 41.6 | 71.0 | 69.4 | 42.3 | 71.3 | 70.0 | 61.0 |\n| JS | __60.0__ | __91.6__ | __83.6__ | __57.4__ | __95.8__ | 87.7 | __59.1__ | __95.2__ | 88.4 | __79.9__ |\n\nIn Table R1-4, KL-ST and KL-TS denote the KL-divergence $KL(R_k^s, R_l^t)$ and $KL(R_l^t, R_k^s)$ respectively. GW is the Gromov-Wasserstein distance between $ R_k^s$ and $R_l^t$ where the source/target cost is taken as the $L_2$-distance of source/target keypoints. We find that the JS-divergence achieves the best performance, compared with KL-ST, KL-TS, $L_1$-distance, $L_2$-distance, and Gromov-Wasserstein. Due to space limit, we will include this experiment in Appendix B, and cite it in Section 4.\n\n[r1] Nguyen K, Nguyen D, Pham T, et al. Improving mini-batch optimal transport via partial transportation, ICML, 2022.\n\n[r2] Chen T, Kornblith S, Norouzi M, et al. A simple framework for contrastive learning of visual representations, ICML, 2020.\n\n[r3] Khosla P, Teterwak P, Wang C, et al. Supervised contrastive learning, NeurIPS, 2020.\n", " The paper introduces a semi-supervised OT formulation and its solutions and applied it to solving heterogeneous domain adaptation. The weak supervision comes from limited labeling of a set of key points in source and target domains. The authors constructed a key point-guided model by the doc product of the transport plan masked by the binary key point connection and a relation matrix between source and target. 
They then appended the model to the OT formulation as a weighted regularization term. Finally, they extended their model to partial OT problems. Experiments on Office-31 showed that the proposed model achieved better results overall than several existing methods. + The formulations (11-13) are novel to the best of my knowledge.\n+ Theorem 1 looks good. The authors solved their model on partial OT problems by adding dummy components to the formulation and later proved that the solution to the original problem can be easily derived from the solution to the reformulated problem and the reformulated problem is solvable.\n\n- The new formulation is quite expensive in memory (perhaps in time as well) because of additional pair-wise discrepancies and the authors didn't discuss that at all.\n- The authors didn't discuss the difference between their key point-guided formulation and semi-supervised OT. I feel that $L_{kpg}$ can be directly incorporated into $L_{OT}$ by designing a more informative mask $M$ that directly encodes the \"relation\". It looks redundant to me that the authors had to design a binary mask but then treat it as a regularizer not a constraint. Perhaps it's a dead end but I feel in general the authors didn't clearly explain their motivation and ideas. ---\nWhat's the difference between the idea of key point-guided OT and the idea of semi-supervised OT in OTDA by Courty et al. [9]?\n\n---\nLine 138: \"If the paired key points (i,j) are matched, ... all mass of point $x_i$ must be transported to $y_j$ and $y_j$ can only receive the mass from $x_i$.\"\n\nI miss the motivation of this idea. Since we're solving Kantorovich OT, how do we expect the connection between key points and non-key points is one-to-one?\n\n---\nLine 161: Why do we set it to $0.1 \\times \\max_{i,k}\\{C_{i,k}^{s}\\}$? Is $\\tau$ tunable and thus cross-validated?\n\nWhere do $R_{k, i_u}^{s}$ and $G_{k,l}=d(R_k^s, R_l^t)$ come from? Why do we choose these discrepancies to construct the relation?\n\nPlus, $G$ would take quite a lot of memory if $m$ and $n$ are large. What is the peak memory consumption during the experiments with Office-31? No negative societal impact found. See above for limitations.", " The paper proposes to guide the optimal transport plan by imposing a priori some couplings between \"keypoints\" into the transport matrix. The relation to each keypoint is also used to define a \"guiding\" matrix that is considered as the cost matrix of the transport problem. Several variants are proposed, such as a GW- or a partial formulation of the problem. The experimental setup considers the Heterogeneous Domain Adaptation (HDA) scenario, in which it is assumed that some points (of all the classes) of the target distribution are labelled. The paper lies in a line of works that aim at constraining the OT problem to encode some extra information or to speed up the computation. Regarding the use of key points, it is quite common to use a few anchor points to supervise some of the matching between points or distributions. From a computational point of view, the problem is rewritten by introducing a mask matrix and an algorithm is proposed to solve the problem. To my knowledge, introducing labeled key points to constrain the matching is original. In the context of HDA, experiments show that it is not enough to beat SotA methods. To improve the classification performances, a dedicated cost matrix is constructed. It is based on a function of the distance of each point to each key point within each domain. 
\nThe methodology shares similarities with some OT variants, notably with hierarchical OT. In the latter, instead of favoring matchings between some keypoints, a matching between subgroups (e.g. clusters) is sought. It also shares some similarities with TLB (« third lower bound » of Gromov-Wasserstein), in which each point of each domain is described as a set of distances to all the points of the same domain, and a « wasserstein of wasserstein » of these distances is computed. The originality and advantages of this « relation preservation » guiding matrix w.r.t. those alternatives are not discussed. No comparison of the performances is performed either. This constitutes the main weakness of the paper as this guiding matrix seems to be the main ingredient to improve the performances. \nThe experimental validation of the method is performed in a heterogeneous domain adaptation context. In this context, all the labels from the source distribution are known and only a few of the target ones are provided. It is shown that adding only the keypoints to drive the matching is not enough to beat the SotA (10 points behind the best competitor) but that adding the guiding matrix improves the results. An additional experiment shows that performance increases with the number of keypoints. It is unclear if the competitors also use some labeled points to drive the learning, which makes the results with the proposed method and SotA difficult to compare. \n\n\nStrengths of the paper:\n- the paper is really well written and easy to follow; figures illustrate clearly the proposed method.\n- the methodology is sound and performs favorably in the considered HDA scenario.\n- extensions with entropic regularization or partial OT are provided.\n\nWeaknesses:\n- insufficient comparison with similar OT variants such as hierarchical OT and TLB\n- experimental setup that does not fit exactly the proposed method: while the hypothesis that we have access to labels of some of the target distribution makes sense, it is unclear how the method is sensitive to the choice of the keypoints within the source domain, or if it could be possible to use all the labeled information of this source domain. \n- it is unclear how the setting can be extended to deep methods, where mini-batches are used. \n\n\nMinor comments :\n- «  Fuzzed OT » should read « fused OT »\n- « Frank-Walfe » algorithm\n- «  Offce 31 »\n Could you clarify \n- why one should use your method rather than hierarchical OT or TLB?\n- could it be extended to the case where the numbers of keypoints in the source and target distributions are different? n/a", " This paper utilizes annotated keypoints to tackle incorrect matchings of optimal transport models. First, the authors impose a mask-based constraint on the optimal transport problem to preserve the matching of keypoint pairs. Secondly, they propose to preserve the relation of each data point to the keypoints by defining a new cost matrix. Their method, named KeyPoint-Guided model by ReLation preservation (KPG-RL), can be solved by Sinkhorn’s algorithm and is even supported in incomparable spaces. Furthermore, they extend KPG-RL to the partial OT setting. Finally, they verify the effectiveness of their proposal in the heterogeneous domain adaptation application, both closed-set and open-set. Though the mask-based constraint version of OT in this paper is not completely new (more details in the next section), the proposed solution, KPG-RL, in this paper is novel and interesting. 
In terms of the theoretical results, they are simply the extensions of previous works. Moving onto the experimental parts, KPG-RL shows a noticeable improvement over baseline methods in heterogeneous domain adaptation. In addition, the authors also conducted an ablation study to show the stability of their proposal for different choices of hyper-parameters. Overall, this paper is easy to follow with some good illustrations which are really helpful in explaining the concepts. Two major concerns include the lack of references to related works and experimental settings. I have the following questions:\n1. **Related work**\n* The OT problem in “Preservation of matching of keypoints in transport” is similar to the Masked Optimal Transport (Masked OT) [r1]. Please clarify the difference and contribution of this paper compared to the “previous” work.\n* In the experimental part, a comparison with Masked OT and Masked GW [r1] can be conducted to illustrate the advantages of the proposed methods. For instance, KPG (w/dist) in Section 5.2 is a good candidate. \n[r1] Zhang, Jiying, et al. \"Fine-tuning graph neural networks via graph topology induced optimal transport.\" arXiv preprint arXiv:2203.10453 (2022).\n\n2. **Relation score.** What is the explanation/intuition/inspiration for defining the relation score in Equation 7? Because there are several forms that can satisfy the properties in L170-172. The authors should discuss the choice of the relation score in more detail because, in my opinion, it is one of the major contributions compared to Masked OT.\n\n3. **Cost function.** An ablation study or an explanation for the choice of d, which is set to Jensen-Shannon in this paper, is recommended. \n\n4. **Reproducibility.** A lot of experimental settings are missing and there is no code for reproduction?\n* The details of the kernel SVM are missing? \n* What is the value of $\\alpha$ for each KPG-based method in Section 5.1?\n* What is the value of hyper-parameters in Section 5.2?\n\n5. **Experiments.** \n* *Sensitivity to $\\epsilon$.* The range of $\\epsilon$ in Appendix B.2 should be wider, e.g. $\\epsilon = 0.001, 0.01, 0.1, 1, \\ldots$.\n* *Open-set HDA.* Does the set of common labels of the target domain contain or equal the label set of the source domain? If it is the case, it is reasonable to transport more than 1 - $\\eta$-proportion (even all) of labeled source domain data. This could be a plausible explanation for the finding in L227-228 in Appendix B.3.\n* *Sensitivity to hyper-parameters.* From Tables A-2, A-3, and especially A-5, it is difficult to conclude that KPG-RL is stable to different choices of hyper-parameters. \n\n**Minors**\n* In Figure 2, the entry in row 3 and column 6 is 0, which is missing. \n* The line numbers in Checklist are outdated.\n\nI am happy to increase my score if the above questions are adequately addressed.\n\n The authors stated that the limitation of their method is in L184-185 but I think what they meant was L179-182. Other than that, one additional limitation of KPG-RL is the introduction of new hyper-parameters, which is the temperature $\\tau$ and the dissimilarity function $d$. Together with $\\epsilon$ in the entropic regularization, users have to tune a lot to find the best set of hyper-parameters. \n", " This paper presents the KeyPoint-Guided model by ReLation preservation (KPG-RL), which allows to combine annotation information into many popular optimal transport (OT) models. 
The authors show very competitive performance when integrating KPG-RL into classical OT models for the heterogeneous domain adaptation task. The paper is well written and the authors have well illustrated the advantage and motivation of KPG-RL over the traditional GW and KP models.\n\nHowever, I would say that the main contribution is mostly on the methodology, so more theoretical understanding would be desirable. Apart from a few theoretical results which help validate the intuition and motivation, I think it is also necessary to show that KPG-RL is a reliable divergence/metric (even though the empirical evidence suggests that it is). More precisely, given prior correct matching of keypoint pairs,\n\n- In the setting of OT distance, where points lie on the same ground space, do we have that $L_{kpg} = 0$ (or $\\text{KPG-RL-KP} = 0$) is equivalent to the equality of two reference measures $p$ and $q$?\n\n- When KPG-RL is incorporated in the GW model (i.e. KPG-RL-GW), does it still preserve the isomorphism? That is, when the two graphs are isomorphic (in the usual sense of GW distance), do we have $\\text{KPG-RL-GW} = 0$, and vice versa?\n\n- Can we show that KPG-RL or any of its variations (GW and LP) defines a proper metric?\n\nMoreover, while the usual GW model performs very poorly because it cannot incorporate prior information, the fused GW model $[35]$ can, and it should be considered as a competing method. For example, one can consider the fused GW model of the form: $\\min_{\\pi \\in \\Pi(p,q)}\\alpha L_{gw}(\\pi) + (1 - \\alpha) L_w(\\pi)$, where either $L_w(\\pi) = \\langle M, \\pi \\rangle$, or $L_w(\\pi) = \\langle M \\odot G, \\pi \\rangle$. - I find the idea of introducing $G$ interesting and smart. So I am curious to see how KPG-RL would perform without $G$, i.e., what if we only use $L_{kpg}(\\pi) = \\langle M, \\pi \\rangle_F$?\n\n- Each entry in the matrix $G$ is the Jensen-Shannon (JS) divergence between two probability vectors. How does the choice of measure of similarity impact the performance of KPG-RL? Why JS divergence but not the KL divergence, or $L_2, L_1$ distances, or even Wasserstein distance (which is not computationally expensive in this case)?\n\n- Do the authors have a motivation to prefer KPG-RL to the fused GW model above?\n\n- What is the range of $\\alpha$ in equations $11$ and $12$? Is it $[0,1]$? Is it tuned during training?\n\n- The alignment in Fig. 4 is not very visible.\n\n- In the experiment in Section 5.2, how is the target data transformed for the GW model?\n\n- In the Appendix, please provide more details on the barycentric mapping in $[48]$. No, the authors do not discuss these points." ]
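For readers following the exchanges above, a minimal NumPy sketch of the keypoint-guided pipeline under discussion — relation scores as a temperature softmax over distances to the keypoints (Eqs. (7)-(8)), a Jensen-Shannon guiding matrix $G$, a mask $M$ enforcing the paired keypoints, and an entropic (Sinkhorn) solve of the masked problem — is given below. This is an illustration only, not the authors' implementation: the Euclidean ground cost, uniform marginals, function names, and the milder default $\epsilon$ (chosen to avoid needing log-domain updates) are all assumptions.

```python
# Illustrative sketch (not the authors' code) of keypoint-guided OT as discussed
# in the rebuttal: relation scores to the keypoints, a JS-divergence guiding
# matrix G, a keypoint mask M, and Sinkhorn iterations on the masked kernel.
import numpy as np

def relation_scores(X, key_idx, rho=0.1):
    """Softmax over negative distances from every point to its domain's keypoints."""
    C = np.linalg.norm(X[:, None, :] - X[key_idx][None, :, :], axis=-1)  # (n, K)
    tau = rho * C.max()                      # temperature, as in the rebuttal's Q5
    logits = -C / tau
    logits -= logits.max(axis=1, keepdims=True)
    R = np.exp(logits)
    return R / R.sum(axis=1, keepdims=True)  # each row is a probability vector

def js_divergence(P, Q, eps=1e-12):
    """Pairwise Jensen-Shannon divergence between rows of P (m, K) and Q (n, K)."""
    P, Q = P[:, None, :], Q[None, :, :]
    M = 0.5 * (P + Q)
    kl = lambda A, B: (A * (np.log(A + eps) - np.log(B + eps))).sum(-1)
    return 0.5 * kl(P, M) + 0.5 * kl(Q, M)   # (m, n) guiding matrix G

def keypoint_mask(m, n, pairs):
    """Zero out row i and column j except entry (i, j) for each paired keypoint."""
    M = np.ones((m, n))
    for i, j in pairs:
        M[i, :], M[:, j] = 0.0, 0.0
        M[i, j] = 1.0
    return M

def masked_sinkhorn(G, M, p, q, eps=0.05, iters=2000):
    """Entropic OT on the masked kernel; zero mask entries stay zero in the plan."""
    K = M * np.exp(-G / eps)   # use a log-domain solver instead for very small eps
    u, v = np.ones_like(p), np.ones_like(q)
    for _ in range(iters):
        u = p / (K @ v + 1e-30)
        v = q / (K.T @ u + 1e-30)
    return u[:, None] * K * v[None, :]

# Hypothetical usage: Xs (m, d_s), Xt (n, d_t), list of paired keypoint indices `pairs`
# G = js_divergence(relation_scores(Xs, [i for i, _ in pairs]),
#                   relation_scores(Xt, [j for _, j in pairs]))
# plan = masked_sinkhorn(G, keypoint_mask(len(Xs), len(Xt), pairs),
#                        np.full(len(Xs), 1 / len(Xs)), np.full(len(Xt), 1 / len(Xt)))
```

In this sketch the mask keeps the paired keypoints matched by construction, while the guiding matrix is what propagates that supervision to the remaining points — the distinction that the comparison with Masked-HOT and Masked-TLB above is meant to isolate.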
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "zhldE7vuPVYi", "mYEOe1R7G8N", "96Z5_0HBj_U", "4bDQY_hKtV1", "rZZVsw-bXt3", "OKXG7aBUr_6_", "l5A_8GUNCfA", "li4O7btuyS", "li4O7btuyS", "li4O7btuyS", "eOO2QON-zu6", "eOO2QON-zu6", "eOO2QON-zu6", "vvrcEPf0Bs", "vvrcEPf0Bs", "vvrcEPf0Bs", "57aGvEl2wyP", "57aGvEl2wyP", "57aGvEl2wyP", "nips_2022_m6DJxSuKuqF", "nips_2022_m6DJxSuKuqF", "nips_2022_m6DJxSuKuqF", "nips_2022_m6DJxSuKuqF" ]
nips_2022_qSYVigfakqS
Weak-shot Semantic Segmentation via Dual Similarity Transfer
Semantic segmentation is a practical and active task, but severely suffers from the expensive cost of pixel-level labels when extending to more classes in wider applications. To this end, we focus on the problem named weak-shot semantic segmentation, where the novel classes are learnt from cheaper image-level labels with the support of base classes having off-the-shelf pixel-level labels. To tackle this problem, we propose a dual similarity transfer framework, which is built upon MaskFormer to disentangle the semantic segmentation task into single-label classification and binary segmentation for each proposal. Specifically, the binary segmentation sub-task allows proposal-pixel similarity transfer from base classes to novel classes, which enables the mask learning of novel classes. We also learn pixel-pixel similarity from base classes and distill such class-agnostic semantic similarity to the semantic masks of novel classes, which regularizes the segmentation model with pixel-level semantic relationship across images. In addition, we propose a complementary loss to facilitate the learning of novel classes. Comprehensive experiments on the challenging COCO-Stuff-10K and ADE20K datasets demonstrate the effectiveness of our method.
Accept
Two reviewers give a weak accept rating while the other one gives a borderline reject rating. Considering the low confidence of the negative comment and the contrary comments in paper writing (confident "easy to follow" vs. unconfident "hard to understand"), the AC would lean to accept this paper.
train
[ "6tL5m967tuS", "zhIzL9_aRYR", "yD1rQRJEUJ3", "-qBnk7OyXIt", "vkMi8Kpqdqr", "Wbs17t_rXfQ", "OsG_3CTVfIF", "Zu4-3OUyfgI", "mzgIQU22N2R", "ZcVvYUbnWf" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your recommendation. We are pleased to release the code and model for future researchers to further explore.", " Thanks for the response. \n\nThe authors have replied my question and resolved my concerns. I am happy to recommend this paper to be accepted.\n\nIn addition, the proposed method have multiple modules and steps, it is suggested the authors can release the source and trained model if it get accepted finally, which would be easier for future researchers to reproduce.", " Thanks for your encouragement and recognizing our contribution!", " Thank the authors for the response. \nThe rebuttal resolves most of my concerns about the contributions. Considering the contributions of this work, although somewhat incremental, I believe it is meaningful in WSSS. \nThe authors have revised the manuscript: 1. clarified the contributions (Q1), 2. provided explanations about how Maksformer correlates with similarity transfer (Q2), 3. made the illustrations in Fig. 2 clearer (Q3), 4. improved the quality of the figures (Q5). \nTherefore, I would like to increase my score from 4 to 6. \n\n", " __Q1__. In view of the contribution of this work, the merit of the proposed work is unclear. For instance, in Line 77, contribution 1: \"...(dual similarity transfer framework) is effective for proposal-pixel similarity transfer\". It is meaningless since the \"proposal-pixel similarity transfer\" is one of the components of the dual similarity transfer framework.\n\n__A1__. We choose to build our framework on MaskFormer, and MaskFormer lays the foundation for proposal-pixel similarity transfer. As introduced in Line 52-55, MaskFormer disentangles segmentation task into two sub-tasks: proposal classification and proposal segmentation, in which proposal segmentation corresponds to proposal-pixel similarity. We transfer such similarity from base classes to novel classes to obtain the high-quality segmentation results of novel classes, as shown in Tab. 2 and Fig. 4. Furthermore, our framework built on MaskFormer is more elegant, compared with the complex pipeline of RETAB [43] and WSSS methods [33,36] built on CAM [41], as discussed in Line 103-107. We have revised the summary of contribution 1 to make it more easily understood.\n\n---\n\n__Q2__. I don't see a clear relation with the idea of \"similarity transfer.\" Could there be more explanation about why it is called proposal-pixel similarity transfer?\n\n__A2__. As for “similarity”, the binary mask is produced by dot-product-sigmoid (See Line 132-134) between proposal embedding and pixel embeddings. Each entry in the binary mask indicates whether the proposal embedding is similar to each pixel embedding. As for “transfer”, proposal-pixel similarity is learned on base classes using GT masks of base classes and transferrable to novel classes. With novel proposal embeddings and proposal-pixel similarity, we can obtain the binary masks for novel classes by dot-product-sigmoid. Such similarity is pairwise semantic similarity and thus transferrable across different categories, which has also been exploited before [6,29,4]. Our experiments (Tab. 2,3) also provide in-depth support for the transferability.\n\n---\n\n__Q3__. Symbols are not reflected in Figure 2, making it hard to correspond with the text description. And there are too many natural language descriptions in Section 3.3. I think a formal mathematical description is needed to help understand how the method works.\n\n__A3__. Thanks for your advice. We have modified Fig. 
2 with symbols and improved the mathematical description in Section 3.3. We also have carefully revised the paper to make it more easily understood.\n\n---\n\n__Q4__. In addition, it seems that the proposed approach explicitly utilizes the \"ignore labels\" for learning masks of novel categories in Pixel-Pixel Similarity and Complementary Loss. I wonder whether it would cause the mask information leakage of the novel classes? If the novel and base classes do not appear in the same image, would the Complementary Loss fail to work? From another view, the results are close to the FullyOracle result on some split. Is it because of the possible information leakage?\n\n__A4__. There is no information leakage of the novel classes, because all the not-base pixels (including both novel and ignore) are labeled as class 255 in implementation as stated in Line 221-223, so the model cannot see the real ignore labels.\n\nIn the extreme cases, the Complementary Loss is still effective. If one image has only novel classes, the loss encourages the model to either predict high score for novel class or predict score lower than the ignore prior value. If one image has only base classes, the loss forces the model to either predict low score for novel class or predict score lower than the ignore prior value. The Complementary Loss provides valid information in both cases.\n\nActually, the difficulty of learning novel classes depends not only on the novel classes themselves but also depends on the transferability from base classes to novel classes. Therefore, some splits may happen to fall into easy cases, and thus we use more than one random split to verify the effectiveness of our method.\n\n---\n\n__Q5__. Quality of the figures is poor. Texts with light color in Figure 1(b) can not be seen clearly, and all figures would be blurred when zooming in.\n\n__A5__. Thanks for the suggestion. In Fig. 1 (b), considering that two images have duplicate ouputs, we use deeper color for mentioned variables while lighter color for unmentioned variables. We have revised Fig. 1 (b) and other figures for better quality. \n", " __Q1__. According to the details in Section 4.2, pseudo-labels of novel classes generated by CAM are produced in advance. These pseudo-labels are utilized in the training process. However, the authors do not mention this in the method section. I think this part is crucial and should be included.\n\n__A1__. As stated in Line 237-239, the re-trained version of our method uses the pseudo-labels of novel classes predicted by our model, instead of using CAM. As introduced in Line 233-236, the pseudo-labels generated by CAM is used for WSSS baselines and RETAB for training the segmentation model. As shown in Tab 1, even without re-training, our method can achieve satisfying performance and could be further improved after the commonly used re-training.\n\n---\n\n__Q2__. Line 162 - 164, 'Under ... produce a binary similarity with all the pixel embeddings.'. Please explain how the similarities are learned and related to the supervision.\n\n__A2__. Specifically, we compute the dot-production-sigmoid between each proposal embedding and each pixel embedding, where the dot-production corresponds to a similarity function. For each base class, we have its GT mask indicating whether the pixel belongs to the base class, and thus we have the supervision on the binary similarity between base proposal embedding and pixel embeddings. Specifically, the supervision is the mask loss in Eqn. 2 for similarity learning. 
\n\n---\n\n__Q3__. Is the SimNet trained in an end-to-end manner?\n\n__A3__. Yes, the whole framework in Fig. 2 is trained end-to-end (one-stage). \n\n---\n\n__Q4__. Line 189, what are the values of R_n ?\n\n__A4__. The value of R_n is the pixel-pixel similarity estimated by the SimNet, indicating whether the input pixel pair comes from the same class. \n\n---\n\n__Q5__. In Section 3.2, the notations should be specifically explained, e.g., i, w,h,c.\n\n__A5__. Thanks for the advice. Specifically, [H,W] is the spatial size of pixel embeddings and C is the channel number of feature. We use [h,w] to indicate the spatial index of pixel and c to indicate the class index in K classes totally.\n", " __Q1__. The paper is very hard to understand. It is unclear why the proposed method works well.\n\n__A1__. We have carefully revised the paper to make it more easily understood. \n\nWe build our method on MaskFormer, which is relatively harder to understand compared with typical segmentation model (e.g., FCN). The key insight of MaskFormer is to disentangle the semantic segmentation task into two sub-tasks: proposal classification and proposal segmentation, as introduced in Line 52-56. The first sub-task assigns class labels to proposal embeddings. The second sub-task calculates the similarity between each proposal embedding and all pixels to obtain a similarity map, which is the segmentation mask corresponding to the class of this proposal embedding.\n\nBased on the two sub-tasks of MaskFormer, our proposed method could be easily understood. For proposal classification, base classes and novel classes both have image-level labels for learning. For proposal segmentation, novel classes depend on the dual similarity transferred from base classes for learning. See A2/A3 below for further explanation. \n\n---\n\n__Q2__. The reviewer is not sure why the binary mask estimation model trained with base classes is applicable for novel class.\n\n__A2__. Following A1, it is applicable because the similarity (binary mask) is learned on base classes and could be transferred to novel classes. Specifically, base proposal embeddings and novel proposal embeddings are learned using image-level labels. The proposal-pixel similarity (dot-product-sigmoid) is learned using pixel-level labels of base classes. With novel proposal embeddings and proposal-pixel similarity, we can obtain the binary masks for novel classes by the same dot-product-sigmoid. The similarity is pairwise semantic similarity and thus transferrable across different categories, which has also been exploited before [6,29,4]. Our experiments (Tab. 2,3) also provide in-depth support for the transferability.\n\n---\n\n__Q3__. It is unclear how to train the model using base classes which have pixel-wise annotation and novel classes which have only image-level labels.\n\n__A3__. The key is that the two sub-tasks for segmentation are disentangled. We can supervise two sub-tasks for base classes and only one sub-task for novel classes. The different supervisions are applied according to the assigned classes respectively (see the class selection in equations and loss color in Fig. 2). For example, the mask loss is only applied to the binary masks of base classes and the classification loss is applied to both base classes and novel classes. \n\n---\n\n__Q4__. Only one baseline, RETAB, was used. No meaning to compare with WSSS in which no pixel-wise annotation was used at all.\n\n__A4__. The WSSS baselines we compared are not standard WSSS methods. 
As described in Line 231-235, we use pixel-wise annotations of base classes in the re-training stage. In other words, these WSSS methods use the same supervision as our method for a fair comparison. \n\n---\n\n__Q5__. Why sampling only 100 pixels (J=100) is enough ? It seems to be very small compared to the total pixels of an image.\n\n__A5__. On the one hand, the feature map has a relatively lower resolution (160x160) . On the other hand, the number of pixel pairs is 100x100, which is relatively enough for each training iteration. The ablation study in Tab. 1 in Supplementary indicates that using more pixels (e.g., 150) only slightly improves the performance. Our default value is a reasonable choice considering the computation consumption.\n\n---\n\n__Q6__. \\gamma was examined in the ablation studies. Where \\gamma was used? Eq.5 does not contain \\gamma.\n\n__A6__. As stated in Line 278-279, \\gamma represents the prior probability for the ignore class, similar to the background threshold used in weakly supervised semantic segmentation [17,33,35]. More detailed descriptions can be found in Line 207-210. \n", " This paper proposes a novel method on weak-shot semantic segmentation, in which fully-supervised model, MaskFormer, with dual similarity transfer was employed. The basic idea is totaly diffrent from the existing method of weak-shot segmentation, RETAB, which is based on the WSSS method. The experimental results showed the effectiveness of the proposed method. Pros) \n + New approarch for weak-shot segmentation task\n + The results by the proposed method outperformed the baseline, RETAB.\n\nCons)\n + The paper is very hard to understand. It is unclear why the proposed method works well.\n + Only one baseline, RETAB, was used. No meaning to compare with WSSS in which no pixel-wise annotation was used at all. 1) Why sampling only 100 pixels (J=100) is enough ? It seems to be very small compared to the total pixels of an image.\n2) \\gamma was examined in the ablation studies. Where \\gamma was used? Eq.5 does not contain \\gamma. \n3) The reviewer is not sure why the binary mask estimation model trained with base classes are applicable for novel class.\n4) It is unclear how to train the model using base classes which have pixel-wise annotation and novel classes which have only image-level labels. In Sup. Sec.8, the limitation is writen concretely. Since the limitation discussion is important, part of it should be included in the mail text. ", " This paper focuses on the weak-shot semantic segmentation problem which consists of two groups of categories, i.e., one with both annotated categories and masks, while another one with only labeled categories. For predicting precise segmentation masks, this paper proposes a dual similarity transfer network based on MaskFormer. Experiments are conducted on the COCO-Stuff-10K and ADE20K datasets to show the effectiveness. Strengths:\n+ This paper is well organized and easy to follow.\n+ This paper proposes to use a small network, i.e.,SimNet, to learn the pixel correlations between pixels within or across images with mask annotations, and further, SimNet is applied to estimate the pixel relations of images without mask annotations. Although this design introduces extra computations, this is effective in addressing the novel classes. 
\n+ The experiments show that the proposed method can successfully improve the segmentation accuracy.\n\nWeaknesses:\n- According to the details in Section 4.2, pseudo-labels of novel classes generated by CAM are produced in advance. These pseudo-labels are utilized in the training process. However, the authors do not mention this in the method section. I think this part is crucial and should be included. 1. Line 162 - 164, 'Under ... produce a binary similarity with all the pixel embeddings.'. Please explain how the similarities are learned and related to the supervision.\n2. Is the SimNet trained in an end-to-end manner?\n2. Line 187, what are the values of R_n ?\n3. In Section 3.2, the notations should be specifically explained, e.g., i, w,h,c\n Yes, limitations are adequately addressed.", " This paper proposes a dual similarity transfer framework based on MaskFormer for the Weak-shot Semantic Segmentation task. It consists of the proposal-pixel similarity transfer and the pixel-pixel similarity transfer. Experiments are performed on the COCO-Stuff and ADE20K datasets. \n**Strengths**\n1. The results look good, and evaluations are extensive.\n2. Considering the use of similarity transfer for segmentation makes sense to me. \n\n**Weaknesses**\n1. The presentation of this paper is poor.\n- In view of the contribution of this work, the merit of the proposed work is unclear. For instance, in Line 77, contribution 1: \"...(dual similarity transfer framework) is effective for proposal-pixel similarity transfer\". It is meaningless since the \"proposal-pixel similarity transfer\" is one of the components of the dual similarity transfer framework. \n\n- The method part is also hard to follow. It seems that this paper attempt to put the approach under the concept of similarity transfer [4]. However, the presentation of the method part mixes too many unrelated implementation details with the explanations of the design, making it hard to understand the reasonableness of the method. Based on my understanding, the proposed framework uses Maskformer as the base segmentation network, where the mask loss for novel classes is removed to fit the problem setting. It is a reasonable baseline, but I don't see a clear relation with the idea of \"similarity transfer.\" Could there be more explanation about why it is called proposal-pixel similarity transfer? \n Besides, symbols are not reflected in Figure 2, making it hard to correspond with the text description. And there are too many natural language descriptions in Section 3.4. I think a formal mathematical description is needed to help understand how the method works. \n In addition, it seems that the proposed approach explicitly utilizes the \"ignore labels\" for learning masks of novel categories in Pixel-Pixel Similarity and Complementary Loss. I wonder whether it would cause the mask information leakage of the novel classes? If the novel and base classes do not appear in the same image, would the Complementary Loss fail to work? From another view, the results are close to the FullyOracle result on some split. Is it because of the possible information leakage?\n\n- Quality of the figures is poor. Texts with light color in Figure 1(b) can not be seen clearly, and all figures would be blurred when zooming in. \n\n---\nEDIT \nAlthough this work is somewhat incremental, I believe it is a meaningful attempt in WSSS with moderate-to-high impact in this sub-area. Therefore, I have increased my score from 4 to 6. 
The presentation may be further improved to make it easier to understand. See questions in the Weaknesses part. Although the idea seems interesting, the presentation of this paper makes it boring to read, and it is hard to tell the approach's merit and reasonableness. \n Limitations are discussed in the paper. " ]
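As an aside for readers of the rebuttal above, the proposal-pixel similarity it repeatedly refers to (a dot-product-sigmoid between a proposal embedding and every pixel embedding, with the mask loss applied only to base-class proposals) can be sketched as follows. This is not the paper's implementation: the tensor shapes, function names, and the plain per-pixel binary cross-entropy are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code) of the proposal-pixel similarity and
# base-only mask supervision described in the author responses above.
import torch
import torch.nn.functional as F

def proposal_pixel_logits(proposal_emb, pixel_emb):
    """proposal_emb: (B, N, C); pixel_emb: (B, C, H, W) -> logits: (B, N, H, W).

    sigmoid(logits) is the per-proposal binary mask over all pixels.
    """
    return torch.einsum("bnc,bchw->bnhw", proposal_emb, pixel_emb)

def base_only_mask_loss(logits, gt_masks, is_base):
    """Apply the mask (BCE) loss only to proposals assigned to base classes.

    logits, gt_masks: (B, N, H, W); is_base: (B, N) boolean assignment.
    Novel-class proposals receive no mask supervision, mirroring the setting
    where novel classes only carry image-level labels.
    """
    per_pixel = F.binary_cross_entropy_with_logits(logits, gt_masks, reduction="none")
    per_proposal = per_pixel.mean(dim=(-2, -1))                  # (B, N)
    weight = is_base.float()
    return (per_proposal * weight).sum() / weight.sum().clamp(min=1.0)

# Hypothetical shapes for a quick check:
# B, N, C, H, W = 2, 8, 256, 40, 40
# masks = torch.sigmoid(proposal_pixel_logits(torch.randn(B, N, C), torch.randn(B, C, H, W)))
```

Only the mask branch is restricted to base classes in this sketch; the classification branch would still be trained on image-level labels for both base and novel classes, which is the disentanglement the responses emphasize.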
[ -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 1, 4, 3 ]
[ "zhIzL9_aRYR", "Wbs17t_rXfQ", "-qBnk7OyXIt", "vkMi8Kpqdqr", "ZcVvYUbnWf", "mzgIQU22N2R", "Zu4-3OUyfgI", "nips_2022_qSYVigfakqS", "nips_2022_qSYVigfakqS", "nips_2022_qSYVigfakqS" ]
nips_2022_V03mpOjCwtg
Learning Generalizable Part-based Feature Representation for 3D Point Clouds
Deep networks on 3D point clouds have achieved remarkable success in 3D classification, while they are vulnerable to geometry variations caused by inconsistent data acquisition procedures. This results in a challenging 3D domain generalization (3DDG) problem, that is to generalize a model trained on source domain to an unseen target domain. Based on the observation that local geometric structures are more generalizable than the whole shape, we propose to reduce the geometry shift by a generalizable part-based feature representation and design a novel part-based domain generalization network (PDG) for 3D point cloud classification. Specifically, we build a part-template feature space shared by source and target domains. Shapes from distinct domains are first organized to part-level features and then represented by part-template features. The transformed part-level features, dubbed aligned part-based representations, are then aggregated by a part-based feature aggregation module. To improve the robustness of the part-based representations, we further propose a contrastive learning framework upon part-based shape representation. Experiments and ablation studies on 3DDA and 3DDG benchmarks justify the efficacy of the proposed approach for domain generalization, compared with the previous state-of-the-art methods. Our code will be available on http://github.com/weixmath/PDG.
Accept
The paper works on domain generalization of 3D point cloud classification, and proposes a part-based domain generalization network for the purpose, whose key idea is to build a common feature space of part template and align the part-level features wherein. Three reviewers appreciate the contributions, including the clear motivation, the implicit domain alignment by part-template features, and the proposed part feature aggregation module. They also suggest to improve the paper by clearer definitions of parts, better organization of contrastive learning in the paper, a more complete citation of closely related works, etc. After discussions between the authors and reviewers, consensus is reached on accepting the paper. Congratulations!
train
[ "FZiaT_ioHLe", "ad8ji4FIyl", "JGiZHjx1Mms", "JsjSVAqHA3T", "CDT2Z8UiCJW", "e6KpqvlMhOt", "4D6EGHpOK2", "MNnGj2yzNm", "yAxcxhoilM", "fWklBDO-Aiu", "sj3b80TdN__", "WYGlUXl_G6" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the suggestions. We will consider to clarify the role of shape-level contrastive learning part, compress or move details in sect.4 to supplementary. Since we are allowed to have one extra page if the paper is accepted, we will include all these discussions in the paper. The results and discussions of our method and compared methods with PointNet and DGCNN as the backbone on real-to-sim setting and the hardest variant of ScanObjectNN setting will be included.", " Thank you for the details in the rebuttal. In my understanding, the contrastive learning part seems optional. Would it be better to shift section 4 of the paper to supplementary, and use the space to present such extra results of real-to-sim, the hardest variant of ScanObjectNN? As without the contrastive learning loss the model can outperform previous works then I do not see a reason to have it here. \n\nFor the experiments in the rebuttal, please kindly provide DGCNN backbone in the revised version as well. \n\nI am generally fine with the responses and I'll increase my score accordingly. ", " We thank the reviewer for the comments and suggestions. We will revise our paper accordingly.\n\n**Q1: Definition of part-template and part-based features.**\n\nAs discussed in Sect.3.2, our part-template features are defined as a set of learnable ​$d$-dimension vectors ​${\\\\{H_i\\\\}}^{N_H} \\in \n R^{N_H \\times d}$ for encoding the local geometric priors, where $d$ is the dimension of point-wise features depends on backbone network and $N_H$ is set to 384 as presented in the section of implementation details. The part-level features are defined in Sect.3.1. Given a point cloud ​$P=\\\\{x_1,x_2, ...,x_N\\\\}\\in R^{N\\times3}$ and corresponding point-wise features ​$Z=\\\\{z_1, z_2, ..., z_N\\\\}\\in R^{N\\times d}$, we represent it as a union of ​$M$ overlapped parts ​$P=Q_1\\bigcup Q_2\\bigcup ...\\bigcup Q_M$, where each part ​$Q_i=\\\\{x_{i1}, ..., x_{ik}\\\\}\\in R^{k\\times 3}$ is defined as a center point ​$x_{i1}$ with its ​$k$ nearest neighbor points ​$\\\\{x_{i1}, ..., x_{ik}\\\\}$. $M$ center points are sampled by FPS for better coverage of the entire point cloud. The corresponding part-level feature ​$Z_{Q_i}\\in R^d$ of part ​$Q_i$ is derived by max-pooling on point-wise features of each points in part ​$Q_i$. ​$M$ and ​$k$ are set to 8 and 512 as represented in the section of implementation details. We will further clarify these definitions in the revised paper.\n\n**Q2: The balance of generalization ability of part-level features and final performance of PDG.**\n\nAs discussed in Sect.2, the balance of generalization ability and discrimination ability of part-level features depends on the part size. We explore how could point number per part influence the generalization ability of part-level features and the final performance of PDG. We conduct experiments on the task of **$S\\to S^{\\star}$**, where models are trained on ModelNet (**$M$**) and tested on ScanObjectNN (**$S^{\\star}$**). For all results in this response, we report the \"Last five\" results of each methods and \"Best\" results are also provided in corresponding brackets. The generalization ability of part-level feature is evaluated by average A-distance in all classes, which is a measure to evaluate distribution discrepancy as referred to in Sect.2.\n\nTable r2-1. 
Average A-distance in all classes and classification accuracy (in %) under task $S\\to S^{\\star}$ for PDG with different part sizes.\n\n| **Part-size** | **A-distance** | **$S\\to S^{\\star}$** |\n| --------------- | :----------------: | :--------------: |\n| 2048 (baseline) | 1.44 | 59.8 (61.5) |\n| 512 | 1.06 | **67.6 (69.4)** |\n| 256 | 0.90 | 65.4 (67.1) |\n| 128 | 0.89 | 62.4 (65.3) |\n| 64 | 0.88 | 63.1 (65.9) |\n| 32 | 0.80 | 63.4 (67.6) |\n| 16 | 0.72 | 64.4 (66.7) |\n| 8 | **0.68** | 64.1 (65.4) |\n| 1 | **0.68** | 62.3 (63.7) |\n\nAs shown in Table r2-1, the average A-distance drops quickly by 0.38 when part size decreases from 2048 to 512 and 0.16 when part size decreases from 512 to 256. If we further reduce the part size to 128, 64, 32, 16, and 8, the average A-distance drops slowly. When the part size reduces from 8 to 1, the average A-distance remains unchanged. The A-distance, as a measure of generalization ability of part-level features, reaches to limit when part size is 8.\n\nThe part size also influences the performance of PDG for domain genearlization task. The best classification accuracy is achieved when part size is 512. With smaller part sizes, the performance of PDG drops due to the decreased discrimination ability of part-level features. \n\nWe will revise 512-part-level and 256-part-level to 512-points-part-level and 256-points-part-level.", " **Q3: Ablation on shape-level contrastive learning loss.**\n\nIn this paper, we build a contrastive learning framework upon part-based shape representation to improve the robustness of the learned aligned part-based features. Specifically, part-level contrastive loss encourages learned part-based feature of a part of a shape under different local transformations to be consistent in the feature space in an unsupervised manner, while shape-level contrastive loss pushes global representation of shapes in the same class together. The shape-level contrastive loss could be used for the baseline method. For a fair comparison with the baseline method, we remove the shape-level contrastive learning loss in PDG, denoted as PDG (w/o SCL) and the results of four domain generalization tasks (**$M\\to S^{\\star}$**, **$M\\to S^{\\star}_B$**, **$ S\\to S^{\\star}$**, **$S\\to S^{\\star}_B$**) are presented in Table r2-2, where **$M$**, **$S$**, **$S^{\\star}$**, **$S^{\\star}_B$** represented ModelNet, ShapeNet, ScanObjetNN, ScanObjectNN with background respectively.\n\nTable r2-2. Ablation on shape-level contrastive loss.\n\n| **Method** | **$M\\to S^{\\star}$** | **$M\\to S^{\\star}_B$** | **$ S\\to S^{\\star}$** | **$S\\to S^{\\star}_B$** | **Average** |\n| :------------------ | :-------------: | :--------------: | :-------------: | :--------------: | :-------------: |\n| PointNet | 59.8 (61.5) | 51.5 (53.8) | 55.9 (57.4) | 51.0 (54.0) | 54.5 (56.7) |\n| MetaSets (PointNet) | 60.3 (66.3) | 52.4 (57.5) | 51.8 (55.3) | 44.3 (50.3) | 52.2 (57.0) |\n| PDG (PointNet) | **67.6 (69.4)** | **58.5 (61.1)** | **57.3 (61.8)** | **51.3 (55.5)** | **58.7 (62.0)** |\n| PDG (w/o SCL) | 67.1 (69.3) | 57.1 (60.0) | 57.1 (60.1) | 51.1 (54.8) | 58.1 (61.1) |\n\nAs shown in Table. r2-2, PDG (w/o SCL) performs slightly worse than PDG by **0.6%** in average accuracy, while still outperforms baseline and MetaSets by **3.6%** and **5.9%** respectively. 
These results demonstrate that the major performance gain of PDG is derived by the design of part-based feature representation.\n\n**Q4: Generalization to hardest version of ScanObjectNN.**\n\nWe evaluate PointNet, MetaSets (PointNet) and PDG (PointNet) on the tasks of $M\\to S^{\\star}_H$ and $S\\to S^{\\star}_H$ , where models are trained on ModelNet (**$M$**) and ShapeNet (**$S$**) respectively, then tested on the hardest version of ScanObjectNN (**$S^{\\star}_H$**). Results are in Table R2-3. \n\nTable r2-3. Classification results (in %) of PointNet, MetaSets (PointNet) and PDG (PointNet) on the task of $M\\to S^{\\star}_H$ and $S\\to S^{\\star}_H$. \n\n| **Method** | **$M\\to S^{\\star}_H$** | **$S\\to S^{\\star}_H$** |\n| ------------------- | :----------------: | :----------------: |\n| PointNet | 50.0 (51.9) | 49.0 (50.9) |\n| MetaSets (PointNet) | 47.4 (54.4) | 45.6 (50.0) |\n| PDG (PointNet) | **56.6 (57.2)** | **51.3 (53.9)** |\n\nIn Table r2-3, it can be observed that PDG (PointNet) outperforms both PointNet and MetaSets (PointNet) in two tasks, which demonstrates that part-based feature representation learned by PDG are more generalizable to the shapes under large perturbations. We will include these results in the main paper.", " **Q5: Real-to-synthetic setting.**\n\nWe evaluate PointNet, MetaSets (PointNet) and PDG (PointNet) on the tasks of $S^{\\star}\\to M$ and $S^{\\star}\\to S$, where models are trained on real scan dataset ScanObjectNN (**$S^{\\star}$**) and test on synthetic datasets ModelNet (**$M$**) and ShapeNet (**$S$**). Results are shown in Table R2-4. As shown in Table r2-4, both MetaSets and PDG improve baseline in two real-to-synthetic settings.\n\nTable r2-4. Classification results (in %) of PointNet, MetaSets (PointNet) and PDG (PointNet) on the tasks of $S^{\\star}\\to M$ and $S^{\\star}\\to S$.\n\n| **Method** | **$S^{\\star}\\to M$** | **$S^{\\star}\\to S$** |\n| ------------------- | :--------------: | :--------------: |\n| PointNet | 63.7 (71.0) | 74.7 (80.2) |\n| MetaSets (PointNet) | 64.3 (71.9) | **77.0** (81.2) |\n| PDG (PointNet) | **66.7 (72.5)** | 76.1 (**81.4**) |\n\nWe also conduct experiments on PointDA-10 dataset which consists of three domains, i.e., synthetic dataset ModelNet-10 (**$M$**), synthetic dataset ShapeNet-10 (**$S$**), and real scan dataset ScanNet-10. Six point cloud domain generalization tasks are built, including $M\\to S$, $M\\to S^{\\star}$, $S\\to M$, $S\\to S^{\\star}$, $S^{\\star}\\to M$, $S^{\\star}\\to S$. We use DGCNN as the backbone like previous methods and the same training setting as in 3DDG benchmark. The results are shown in Table r3-1. For MetaSets and PDG, we report the \"Last five\" results. These tasks include all three settings, i.e., real-to-synthetic, synthetic-to-real, and synthetic-to-synthetic. \n\nTable r2-5. Classification accuracy (in %) of various 3DDA and 3DDG methods on PointDA-10 dataset. Results of methods which do not use target domain data during training are in bold. 
Note that the results of DA and DG methods cannot be fairly compared because DA methods use target domain data in training.\n\n| **Method** | **DA/DG** | **$M\\to S$** | **$M\\to S^{\\star}$** | **$S\\to M$** | **$S\\to S^{\\star}$** | **$S^{\\star}\\to M$** | **$S^{\\star}\\to S$** | **Average** |\n| :------------------- | :-------: | :----------: | :------------: | :----------: | :------------: | :------------: | :------------: | :---------: |\n| Supervised | - | 93.9 | 78.4 | 96.2 | 78.4 | 96.2 | 93.9 | 89.5 |\n| **w / o Adapt** | **-** | **83.3** | **43.8** | **75.5** | **42.5** | **63.8** | **64.2** | **62.2** |\n| DANN [R1] | DA | 74.8 | 42.1 | 57.5 | 50.9 | 43.7 | 71.6 | 56.8 |\n| PointDAN [38] | DA | 83.9 | 44.8 | 63.3 | 45.7 | 43.6 | 56.4 | 56.3 |\n| RS [R2] | DA | 79.9 | 46.7 | 75.2 | 51.4 | 71.8 | 71.2 | 66.0 |\n| DefRec + PCM [39] | DA | 81.7 | 51.8 | 78.6 | 54.5 | 73.7 | 71.1 | 68.6 |\n| GAST [40] | DA | 84.8 | 59.8 | 80.8 | 56.7 | 81.1 | 74.9 | 73.0 |\n| **MetaSets [22]** | **DG** | **86.0** | **52.3** | **67.3** | **42.1** | **69.8** | **69.5** | **64.5** |\n| **PDG (Ours)** | **DG** | **85.6** | **57.9** | **73.1** | **50.0** | **70.3** | **66.3** | **67.2** |\n\n\n\nIn Table r2-5, PDG improves baseline by **6.5%** and **2.1%** in two real-to-synthetic tasks, i.e., $S^{\\star}\\to M$ and $S^{\\star}\\to S$. Compared with 3DDG method MetaSets, PDG performs better in the $S^{\\star}\\to M$ task and worse in $S^{\\star}\\to S$ task. Considering the average performance in all tasks, PDG outperforms baseline method by **5.0%** and MetaSets by **2.7%**. It is noticeable that PDG even exceeds some 3DDA methods including DANN [R1], PointDAN [38], and RS [R2]. These results demonstrate that PDG could also solve the real-to-synthetic point cloud generalization problem.", " **Q6: Related works on scan-to-CAD shape retrieval.**\n\nWe have conducted several real-to-synthetic DG classification tasks and show the effectiveness of our approach, as shown in Q5. The rgb-d real scan to CAD retrieval tasks [R3, R4, R5] aims to retrieve a similar CAD model to a given query real scan 3D object. It relies on generalizable feature representation and the similarity measure of real scan objects and CAD objects. We will properly cite and discuss these works in the section on related work. Though our experiments mainly work on domain generalization classification tasks, our part-based feature representation can be possibly extended to the rgb-d real scan to CAD retrieval task, deserving us to try in future work. This can be possibly achieved as follows. Given query shape and candidate shape, they can be represented by two sets of part-based features and the similarity can be calculated by a set-to-set measure. Considering the situation that scanned shapes often suffer from object partiality and different deformation variants. Part-based feature representations are suitable for cross-domain 3D shape retrieval. We will include this future work in the conclusion section.\n\n[R1] Ganin, Yaroslav, et al. Domain-adversarial training of neural networks. JMLR, 2016.\n\n[R2] Jonathan Sauder and Bjarne Sievers. Self-supervised deep learning on point clouds by reconstructing space. In Neurips, 2019.\n\n[R3] Hua, Binh Son, et al. SHREC'17: RGB-D to CAD retrieval with ObjectNN dataset. \n\n[R4] Pham, Quang-Hieu, et al. SHREC’18: Rgb-d object-to-cad retrieval.\n\n[R5] Dahnert, Manuel, et al. Joint embedding of 3d scan and cad objects. In ICCV, 2019.", " We thank the reviewer for the comments and suggestions. 
We will revise our paper accordingly.\n\n**Q1: Results on PointDA-10 dataset.**\n\nFor a fair comparison with current 3DDG method (MetaSets), we only conduct experiments on the dataset provided by them. We evaluate our method on PointDA-10 dataset [38] and compare it with current 3D domain adaptation methods which use target domain data in training procedure (denoted as DA in Table r3-1) and 3DDG methods where target domain data is inaccessible during training (denoted as DG). PointDA-10 dataset consists of three domains, i.e., synthetic dataset ModelNet-10 (**$M$**), synthetic dataset ShapeNet-10 (**$S$**), and real scan dataset ScanNet-10 (**$S^{\\star}$**). Six point cloud domain generalization tasks are built, including $M\\to S$, $M\\to S^{\\star}$, $S\\to M$, $S\\to S^{\\star}$, $S^{\\star}\\to M$, $S^{\\star}\\to S$. These tasks include all three settings, i.e., real-to-synthetic, synthetic-to-real, and synthetic-to-synthetic. We use DGCNN as the backbone like previous methods and the same training setting as in 3DDG benchmark. The results are shown in Table r3-1. For MetaSets and PDG, we report the \"Last five\" results. \n\nTable r3-1. Classification accuracy (in %) of various 3DDA and 3DDG methods on PointDA-10 dataset. Results of methods which do not use target data during training are in bold. Note that the results of DA and DG methods cannot be fairly compared because DA methods use target domain data in training.\n\n| **Method** | **DA/DG** | **$M\\to S$** | **$M\\to S^{\\star}$** | **$S\\to M$** | **$S\\to S^{\\star}$** | **$S^{\\star}\\to M$** | **$S^{\\star}\\to S$** | **Average** |\n| :------------------- | :-------: | :----------: | :------------: | :----------: | :------------: | :------------: | :------------: | :---------: |\n| Supervised | - | 93.9 | 78.4 | 96.2 | 78.4 | 96.2 | 93.9 | 89.5 |\n| **w / o Adapt** | **-** | **83.3** | **43.8** | **75.5** | **42.5** | **63.8** | **64.2** | **62.2** |\n| DANN [R1] | DA | 74.8 | 42.1 | 57.5 | 50.9 | 43.7 | 71.6 | 56.8 |\n| PointDAN [38] | DA | 83.9 | 44.8 | 63.3 | 45.7 | 43.6 | 56.4 | 56.3 |\n| RS [R2] | DA | 79.9 | 46.7 | 75.2 | 51.4 | 71.8 | 71.2 | 66.0 |\n| DefRec + PCM [39] | DA | 81.7 | 51.8 | 78.6 | 54.5 | 73.7 | 71.1 | 68.6 |\n| GAST [40] | DA | 84.8 | 59.8 | 80.8 | 56.7 | 81.1 | 74.9 | 73.0 |\n| **MetaSets [22]** | **DG** | **86.0** | **52.3** | **67.3** | **42.1** | **69.8** | **69.5** | **64.5** |\n| **PDG (Ours)** | **DG** | **85.6** | **57.9** | **73.1** | **50.0** | **70.3** | **66.3** | **67.2** |\n\nFrom Table r3-1, PDG improves the baseline methods by **5.0%** in the average accuracy of all tasks and outperforms 3DDG method Metasets by **2.7%**. Specifically, PDG beats MetaSets in four tasks $M\\to S^{\\star}$, $S\\to M$, $S\\to S^{\\star}$, and $S^{\\star}\\to M$. For two synthetic-to-real tasks $M\\to S^{\\star}$ and $S\\to S^{\\star}​$, the improvements are **5.6%** and **7.9%**. We can also find that PDG exceeds some 3DDA methods including DANN [R1], PointDAN [38], and RS [R2]. We will properly cite more 3D domain adaptation works and include these results in the final version.\n\n**Q2: Results of MetaSets.**\n\nFor all experiments, PDG and MetaSets use same datasets, data preparation and experiment setting. For all experiments with PointNet as backbone, we report their results by directly running their published code. They report 68.28%, 57.19%, 55.25%, and 49.50% in the four tasks, while we get 66.3%, 57.5%, 55.3%, and 50.3% as \"Best\" for MetaSets. 
These results are similar to those reported in their paper. As for the experiments with DGCNN as the backbone, the corresponding codes of MetaSets using DGCNN backbone is not provided. For fair comparison in the experiment settings, we use exactly the same implementation of DGCNN as backbones in our method and MetaSets by replacing the backbone by the same DGCNN in their source codes. The results of MetaSets with DGCNN backbone are reported in the above way which exactly matches our method in dataset, backbone, data augmentations, etc. ", " **Q3: More related works on 3D domain adaptation.**\n\nThanks for this suggestion. How to represent 3D point clouds in different domains is an old but unsolved problem, which has drawn more attention recently. [R3] is one of the first methods to explore the 3D domain adaptation problem, which leverages synthetic scans of 3D scenes from Google 3D Warehouse to train an object detection system for 3D point clouds real-scan data. [R4] and [R5] applied hough transform to the problem of robust 3D shape feature learning and evaluated on point cloud classification and retrieval tasks of 3D domain generalization problem. [R6] factored low-dimensional deformations and pose variations of the 3d shapes and recognized them in the scanned cluttered indoor scene, which is also a 3D domain generalization problem. [R7] focused on the CAD-to-scan retrieval task of the 3D domain generalization problem and introduced a method called CAD-Deform to obtain accurate CAD-to-scan fits by non-rigidly deforming retrieved CAD models. [R8] and [R9] only concentrated on single domain problems. Compared with them, our proposed PDG method works on 3D domain generalization problem, and is built based on the proposed part-based deep feature learning approach. As suggested, we will cite and include these related works in sections of introduction / related works.\n\n[R1] Ganin, Yaroslav, et al. Domain-adversarial training of neural networks. JMLR, 2016.\n\n[R2] Jonathan Sauder and Bjarne Sievers. Self-supervised deep learning on point clouds by reconstructing space. In Neurips, 2019.\n\n[R3] Lai, Kevin, and Dieter Fox. Object recognition in 3D point clouds using web data and domain adaptation. In IJRR, 2010.\n\n[R4] Knopp, Jan, et al. Hough transform and 3D SURF for robust three dimensional classification. In ECCV, 2010.\n\n[R5] Woodford, Oliver J., et al. Demisting the Hough transform for 3D shape recognition and registration. IJCV, 2013. \n\n[R6] Kim, Young Min, et al. Acquiring 3d indoor environments with variability and repetition. ACM ToG, 2012. \n\n[R7] Ishimtsev, Vladislav, et al. Cad-deform: Deformable fitting of cad models to 3d scans. In ECCV, 2020. \n\n[R8] Golovinskiy, Aleksey, et al. Shape-based recognition of 3D point clouds in urban environments. In ICCV, 2009.\n\n[R9] Qi, Charles R., et al. Deep hough voting for 3d object detection in point clouds. In ICCV, 2019.\n\n", " We thank the reviewer for the comments and suggestions. We will revise our paper accordingly.\n\n**Q1: Relationship between part-based domain generalization network and general point cloud models.**\n\nExploiting local patterns has been proven to be important for the success of both CNN and point cloud convolutional networks. They progressively capture features at an increasingly larger receptive field. The ability to abstract local patterns guarantees the generalizability to unseen cases in a single domain. 
As discussed in Sect.3, we find that these local geometric structures are also shared across different shapes in distinct domains, while they are short of semantic information for classification. The balance of generalization ability and discrimination ability relies on the receptive field of the local pattern. Our part-level features could be regarded as the local features with a balanced receptive field. \n\nFor general point cloud models, the local features are directly aggregated by a pooling operation. Compared with them, part-level features in PDG are aligned to the part-template features, resulting in part-based feature representations with better generalization ability. A part-based feature aggregation module finally aggregates these part-based feature representations to a global representation for each point cloud. \n\n**Q2: Adopting part-based feature aggregation module and part-based feature representation to general point cloud models.**\n\nIt is an interesting idea to adopt the part-based feature aggregation module and part-based feature representation to general point cloud models. We conduct two experiments for the point cloud classification task on ModelNet40.\n\n1. We use PointNet++ as the backbone and replace the final max-pooling layer with a part-based feature aggregation module, which improves the classification accuracy from **91.17%** to **91.98%**.\n\n2. We also apply the part-based feature representation to the general single-domain point cloud classification model. PointNet++ could not provide the point-wise features, so we use PointNet as the backbone and train PDG (PointNet) on ModelNet40. PDG (PointNet) improves the classification results from **90.19%** to **90.69%** compared with PointNet.\n\nThese improvements demonstrate that our proposed part-based feature aggregation module and part-based feature representation can be combined with general point cloud models and further improve their performance for train / test data in the same domain. \n\n**Q3: Related works on dictionary learning in cross-domain problem.**\n\nThanks for this suggestion. In CDG-UDA [R1], they propose a category dictionary guided unsupervised domain adaptation model for the cross-domain object detection problem. Category-specific dictionaries are learned from the source domain to represent the candidate boxes in the target domain. \n\nIn our work, for the point cloud domain generalization problem, we find that local geometric structures encoded by part-template features are shared across different domains, which inspires us to align part-level features in different domains to the part-template features. The part-template features serve as a dictionary of local geometric structures of 3D shapes.\n\nWe will cite these related works in our paper.\n\n[R1] Li, Shuai, et al. Category dictionary guided unsupervised domain adaptation for object detection. In AAAI, 2021.\n", " The authors propose to utilize the part-level feature representation for the cross-domain generalization problem in point cloud applications. Given part-level features grouped from point-wise representations, the authors first align them to a learned feature dictionary via cross-attention, and then the aligned features are aggregated with a part-weighted maxpooling strategy. In addition, contrastive learning is conducted in both shape-level and part-level. Empirical results on standard DG benchmark datasets are presented for validation. Strengths:\n1. The method is well motivated. 
The authors find that part-level features present smaller distribution divergence than shape-level features in the cross-domain tasks. Therefore, they propose to adopt part-level features in the DG tasks.\n\n2. Some interesting components are proposed and well justified. The proposed part-template features implicitly achieve the domain alignment by aligning both domains to the learned feature dictionary. The proposed part feature aggregation module outperforms the popular max pooling module. \n\n3. The proposed method achieves state-of-the-art performance on DG benchmarks. \n\nWeaknesses:\n1. I am wondering about the relationship between the proposed part feature based DG method and the general point cloud models (e.g., PointNet++) that utilize part/local features. For example, in PointNet++, the part-level feature is extracted and aggregated hierarchically, which is quite similar to the strategy adopted in this paper. Could you clarify it? \n\n2. Based on the first question, could the proposed module be adopted in general point cloud models? For example, could we replace the last max pooling layer of PointNet++ with the part feature aggregation module proposed in this paper? \n\n3. As for the proposed techniques, the contrastive loss is widely adopted as the learning regularization and utilizing part-level features is also a common practice. In my opinion, the main contribution is the implicit domain alignment with the learned feature dictionary and the part feature aggregation module. So I suggest that the authors include more related work on the application of dictionary learning in cross-domain problems, such as [1].\n\n[1] Li, Shuai, et al. \"Category dictionary guided unsupervised domain adaptation for object detection.\" Proceedings of the AAAI conference on artificial intelligence. Vol. 35. No. 3. 2021. See weaknesses. Not found.", " The authors present a new method for generalizing point cloud classification from synthetic to real data. The authors argue that the local geometric features are more generalizable than the whole shape. They focus on part-based feature representation, and design a part-based domain generalization network for the point cloud classification task. The key idea is to build a common feature space using a part template, and then align the part-level features of the source and the target domain to this template. The authors demonstrate that the proposed method achieves state-of-the-art performance on the 3DDG benchmark. - I like the idea to align local geometric features to solve domain generalization on point clouds. This idea is novel and significant. The technical approach to implement this idea is sound, and the experimental results demonstrate good performance.\n\n- I also like the idea to verify the hypothesis that local geometric features are more generalizable than global features in Fig. 2. However, I would like to point out a few issues here. \n (1) It is true that in general reducing the part size leads to better generalization. But where is the limit? At the very least each part can be reduced to a point, but I do not believe that point-based features are the most generalizable. It could be more interesting to identify by how many points per part we would reach the limit of generalization here. \n (2) 512-part-level and 256-part-level mean 512 and 256 points per part, respectively, I guess. This sounds confusing as I can also think of it as 512 parts and 256 parts. It is better to revise this wording, like 512-points-per-part and 256-points-per-part. 
\n\n- I also value the clarity of the writing, which is very nice and easy to read. \n\n- Despite its great values, the paper suffers from the following issues. \n (1) In terms of technical approach, the contrastive learning part is less well connected to the part-based features for domain generalization. For example, if the authors wish to use contrastive learning, at least shape-level contrastive loss should be used for the baseline methods as well. Or the comparisons should be separated with a table with no contrastive learning utilized. In Table 1, as I understand, the baselines are without contrastive loss but the PDG is with contrastive loss. Please correct me if I misunderstood. \n (2) My second concern is that the experiments conducted are somewhat simplistic. I expect deeper analysis and more experimental settings to be done. Please see my comments in the question section. (1) For completeness, I expect to find some information about how the part template is defined. However, it seems not provided except that in Line 246 the authors mentioned that they followed the experiment setting in MetaSets [22]. Could you clarify how the parts are defined? How does this affect the final performance as the proposed method is largely based on part features? \n\n(2) While the benchmarks are built upon ModelNet/ShapeNet and ScanObjectNN, I see that not all data are utilized. For example, the experiments are only done on two basic variants of ScanObjectNN including OBJ-ONLY and OBJ-BG. How does the transfer work if the target is the hardest variant of the ScanObjectNN? \n\n(3) As a domain generalization problem, it seems insufficient to conduct only sim-to-real transfer experiment. How about real-to-sim, i.e., training on ScanObjectNN and testing on ModelNet/ShapeNet? In practice, this could be used to build applications such as retrieving a CAD model from a given scan. As all data is available, I wish to further understand what challenges remain that make this experiment not conducted in both MetaSets and this paper. \n I think the related work section in this paper is quite short and needs some revisions. First, while not exactly the same, I found in the literature there are some 3D tasks that link different domains together such as scans and CAD object retrieval. I think this is worth some further discussions about the connections of these specific tasks with the domain generalization problem presented in this paper. \n[A] SHREC'17: RGB-D to CAD retrieval with ObjectNN dataset, 2017, 2018.", " The presented method detects “global” features (PointNet or DGCNN) locally on sampled points. Then learn relations between those local representations as part-level aggregation. The performance is further improved by contrastive learning. \n\nAuthors evaluate the approach on several cross domain datasets where the method is learnt on one domain and tested on another one. The target (test) domain is inaccessible during training.\n\nDomain adaptation is an important and well old problem for 3d point cloud processing.\n Pros:\n-Clear motivation, I like the motivation by features distance between domains (Fig. 
2)\n\n\n-While the idea of learning part-based models from local features is old and highly researched, the presented method on 3D point clouds with Neural Networks focused for the domain adaptations seems novel.\n\n-The problem of domain adaptation of 3d point cloud processing when the target domain is unavailable during training is a very important and unsolved problem.\n\n-Most of questions I had during reading were further answered\n\n\nCons:\n-The approach is motivated from many previous works that focus on domain adaptation for 3d point cloud that are not cited, though the approach is novel by using it in neural networks.\n\n-Authors did not evaluate on some datasets pairs (training-testing) that will allow much broader comparison. That raises several questions why. I believe it will be also good to report why presented numbers of other methods differ from original papers.\n -Why did authors not report M->S like a comparison to other methods? It would be a very nice comparison to other papers [Achituve21, Shen22,..]. It looks strange that evaluation is done with datasets [11],[12],[14], but popular [13] (ScanNet) is missing, ScanNet is often used in previous methods [Achituve21, Shen22,..]. Readers can think about many reasons why authors didn't report it and it looks strange that authors perform too many results, but the most important settings are somehow missing.\n\n-Authors say that they executed the competitors' code, still, they have different performance values than in other papers (in other papers they also vary from paper to paper). E.g. M->S*, MetaSet paper claims they have 72% (so better than the presented method), but authors report ~66% as best for Metaset. Is it a problem of the field that once you download a repository of a paper you obtain different results that were originally published? The difference is higher than the paper’s improvement :( \n\n\n-If one looks on the proposed method as using \"global\" features (e.g. PointNet) locally on parts of the object and then learning relations of those local responses, then this is an old method how to represent 3d point cloud object in the scene between different domains, for example [Woodford13, Knopp10] learns spatial configuration of such features for the nice 3d shapes and apply it on the 3d scans, [Kim12] learns parts deformation of the 3d shapes and recognize them in the scanned cluttered scene. [Lai10] Looks like one of the first method to do 3d point domain adaption. Here is list of related missing work:\n[Lai10] Lai and Fox, Object Recognition in 3D Point Clouds Using Web Data and Domain Adaptation, IJRR 2010\n[Ishimtse20] Ishimtse at all, CAD-Deform: Deformable Fitting of CAD Models to 3D Scans, ECCV 2020\n[Qi19] Qi et all, Deep Hough Voting for 3D Object Detection in Point Clouds, ICCV 2019\n[Golovinskiy09] Golovinskiy et al, Shape-based Recognition of 3D Point Clouds in Urban Environments, ICCV'09\n[Knopp10] Knopp et all; Hough Transform and 3D SURF for Robust Three Dimensional Classification, ECCV 2010\n[Woodford13] Woodford et all, Demisting the Hough Transform for 3D Shape Recognition and Registration, IJSV 2013\n[Kim12] Y.M. Kim et al., Acquiring 3D Indoor Environments with Variability and Repetition, ACM ToG, SIGGRAPH Asia 2012\n It looks there is no potential for negative social impact of the work.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "ad8ji4FIyl", "JsjSVAqHA3T", "sj3b80TdN__", "sj3b80TdN__", "sj3b80TdN__", "sj3b80TdN__", "WYGlUXl_G6", "WYGlUXl_G6", "fWklBDO-Aiu", "nips_2022_V03mpOjCwtg", "nips_2022_V03mpOjCwtg", "nips_2022_V03mpOjCwtg" ]
nips_2022_Bv8GV6d76Sy
Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks
Monte Carlo (MC) integration is the _de facto_ method for approximating the predictive distribution of Bayesian neural networks (BNNs). But, even with many MC samples, Gaussian-based BNNs could still yield bad predictive performance due to the posterior approximation's error. Meanwhile, alternatives to MC integration are expensive. In this work, we experimentally show that the key to good MC-approximated predictive distributions is the quality of the approximate posterior itself. However, previous methods for obtaining accurate posterior approximations are expensive and non-trivial to implement. We, therefore, propose to refine Gaussian approximate posteriors with normalizing flows. When applied to last-layer BNNs, it yields a simple, cost-efficient, _post hoc_ method for improving pre-existing parametric approximations. We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo.
Accept
The paper proposes a method to refine Gaussian approximations of the posterior in Bayesian computations by using a normalizing flow. Such Gaussian approximations are usually cheap to obtain, via Laplace approximation or variational Bayes. The method proposed by the authors outperforms standard MC approaches and is competitive with the most sophisticated ones (Hamiltonian MC), while being cheaper. The reviewers praised the experimental results. They also liked the nice explanations and illustrations of the failure of the standard MC approaches. Some remarks about the limited novelty of this discussion with respect to existing works (e.g. Wilson and Izmailov, 2020) were satisfactorily addressed by the authors during the discussion with the reviewers. Overall, the reviewers agreed that, while the writing of the paper could be improved in parts, the discussion and the method proposed in this paper are a nice contribution to Bayesian learning, and will be useful to the community. I will therefore recommend accepting the paper. I encourage the authors to take into account the comments of the reviewers (especially Rev. 5mzJ) when preparing the camera-ready version of the paper.
train
[ "3LppUYAZrV", "yRwDKqhO07H", "J9rTh-7RMPC", "DTaUmuV7yThw", "MLLs9WBeOX", "keY60D4GAw6", "BM35OJVKxS0", "GHfRnt_jHE0", "dFQ6I689GQw", "xFkngTNg2od", "Ub_jHE6u0km" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers for their comments and proposals to further clarify our work. We hope that all concerns were sufficiently addressed by our replies. If you feel like your concerns and questions were not addressed to your satisfaction, we would highly appreciate a follow-up comment. Since the discussion period ends soon, we would like to clarify any remaining issues as soon as possible. Thanks again for all your work!", " Thanks a lot for your further comments! We hope to clarify a couple of points:\n\n> The results on the number of MC samples required / linearization do not directly connect to the method directly.\n\nThey are connected to the proposed refinement method directly: We showed that MC-predictive is desirable since it has a theoretical guarantee and its popular alternative, linearization, is not generally applicable and introduces biases. But, MC-predictive can be expensive due to the number of samples needed. We showed that the key to being efficient in obtaining MC-predictive is by having a fine-grained posterior, such as HMC. But, HMC is expensive and thus we propose the refinement framework, which is much more cost-efficient in obtaining accurate weight-space posteriors.\n\n> I don't think \"our method should perform better due to the fact that it is a principled Bayesian method\" is a sufficient justification, and I think explicit comparisons should be included\n\nThe quoted statement refers specifically to the finding in [1], esp. in their Fig. 1 and Fig. 4, where temperature scaling does not yield good calibration outside of the data region, unlike Bayesian methods. For explicit comparisons re. in-distribution calibration, please refer to [2] (Fig. 6 and Tab. 1)---Bayesian methods are better calibrated than temperature scaling in standard benchmark datasets. We will include similar explicit comparisons and discussions in the next revision. We will also include all-layer baselines, similar to [3] (Fig. 4).\n\n\n\n**References**\n\n[1] Kristiadi et al., Being Bayesian even just a bit, fixes overconfidence in ReLU networks, ICML 2020 \n\n[2] Kristiadi et al., An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence, NeurIPS 2021\n\n[3] Daxberger et al., Laplace Redux – Effortless Bayesian Deep Learning, NeurIPS 2021", " Dear authors, thank you for the rebuttal! \n\nI understand the story of the paper as you described it. It makes sense, but also I still think that it is not particularly focused. The results on the number of MC samples required / linearization do not directly connect to the method directly. Regarding the empirical results, it sounds like the main hope is that the method will provide high-quality uncertainty estimates. I think this statement should be tested more carefully, with comparisons to targeted methods beyond last-layer BNNs. I don't think\n> our method should perform better due to the fact that it is a principled Bayesian method\n\nis a sufficient justification, and I think explicit comparisons should be included.\n\nThat being said, I am still leaning towards acceptance, so I maintain my score of 5 for now.", " Thanks for your review! We hope that our response below can convince you further. If you have any additional doubts, please let us know!\n\n- **Figure 5?** The MPA only uses the diagonal elements of the covariance over the network outputs, among other approximations, which makes it underconfident. We have added the MPA’s derivation in Appendix A—which makes our point more formal— and made Fig. 
5 clearer by changing the colormap.\n- **Fig. 4c?** We don’t claim that linearization induces underconfident predictions per se: rather, we show that linearization, in general, induces biases compared to the ground-truth MC-integrated predictive (in our case, with $10000$ samples).\n- **Different flows for refinement?** Yes, we tried a different flow: in Fig. 6(c), we use the planar instead of radial flow. In any case, the radial flow is mainly used in the paper since it is among the simplest normalizing flows available—for a space of dimension $D$, it only has $D+2$ parameters on each layer (see Rezende & Mohamed, 2015, Sec. 4.1). Even then, we can already show that refinement is effective, even with short flow length (see Fig. 7). While more sophisticated, more expressive flows might improve this further, they come with higher computational costs and is harder to optimize/need to be tuned more. \n- **Elaborate more the cost limitation?** Please find below (Table 1) the theoretical costs of our method, in comparison to other Bayesian methods. Additionally, please refer to Fig. 5 of Daxberger et al., _Laplace redux—effortless Bayesian deep learning_, NeurIPS 2021 in conjunction to our Fig. 7 (bottom) for the concrete, practical values. Please refer also to the last point in our response to **_Reviewer 5mzJ_**.\n\nTable 1: Complexities of on top of MAP in O-notation. Radial flow is assumed for “Refine”.\n\n| | Inference | Prediction (Computation) | Prediction (Memory) |\n|---|---|---|---|\n| MAP | - | M | M |\n| LLLA | NM + C^3 + P^3 | SM | M + C^2 + P^2 |\n| LLLA+Refine | NM + C^3 + P^3 + NFCP | SFCP | FCP |\n| SWAG | RNM | SRM | RM |\n| DE | - | KM | KM |\n| MultiSWAG | KRNM | KSRM | KRM |\n\nM=nr, of parameters of the DNN, N=nr. of training data points, C=nr. of classes, P=nr. of last-layer features, F=normalizing flow length, S=nr. of MC samples for predictive distribution, R=nr. of model snapshots, K=nr. ensemble components.\n\n- **Additional task (GRU + 20Newsgroups)?** Please find below preliminary results on a text classification task with a GRU network on the 20NG dataset. We will update the appendix in the next revision, including also further datasets (SST, TREC).\n\n| Method | MMD | Acc. | NLL |\n|-----------|-------|------|--------|\n| MAP | 0.834 | 72.8 | 1.3038 |\n| LA | 0.808 | 72.7 | 1.2772 |\n| LA+Refine | 0.643 | 72.1 | 0.9904 |\n| HMC | 0 | 72.6 | 0.9838 |\n", " Thanks for your review! We hope that our response below can make you even more confident about our work. We will be very happy to discuss any follow-up questions you might have.\n\n- **Significance of the insights vis-à-vis e.g. Wilson & Izmailov?** Our contribution in Sec. 3 is: We study the effect of weight-space approximation quality against the number of samples needed to obtain good predictive distributions. (See also our responses to **_Reviewer qker_** and **_Reviewer 5mzJ_**.) While our results also reaffirm the widely held belief that single-basin approximations are inferior to more fine-grained counterparts, our conclusion differs from that of Wilson & Izmailov: They argue in their Sec. 3.2 that “[...] Ultimately, the goal is to accurately compute the predictive distribution, rather than find a generally accurate representation of the posterior. [...]”. However, as we have shown in our work, finding an accurate weight-space posterior approximation is still a worthy goal. 
Note that we do so using only a single network, unlike deep ensemble or MultiSWAG, and thus our method is much cheaper.\n- **The refinement method, in comparison to Daxberger et al. [6]?** The proposed refinement method is generally applicable and not tied to specific posterior approximations (e.g. Laplace): The only requirement of our method is a parametric approximation to the posterior (e.g. Laplace, VB—not necessarily has to be Gaussian). However, when applied to Laplace, our method becomes even more compelling in practice due to the fact that Laplace is _post-hoc_ and thus the resulting refined-Laplace method is cheap.\n- **What helps to understand last-layer approximations?** We believe last-layer approximations are natural if one excludes full-layer ones. This is due to the role of the last layer which directly maps the feature space to the output space of a NN. So, given fixed feature vectors (obtained by freezing the previous layers and feedforward the training examples through them), one has a simple linear model which is easy to work with. As a big bonus, last-layer approximations are generally cheap since the dimensionality of the last layer of a NN tends to be smaller than other layers’. In terms of its performance, a straightforward experiment to validate the choice of last-layer approximations is by simply doing a Laplace approximation/variational inference on each layer separately and comparing the resulting performance on e.g. calibration. This can be done easily using recent libraries for LA/VI. However, while interesting, we believe that this experiment answers a problem that lies outside our paper’s specific topic. \n- **Is this a way to calibrate NN automatically?** Our goal is to match the performance given by the gold-standard HMC, cheaply. It is true that HMC often yields better-calibrated predictive distributions, and thus, by extension, the proposed refinement method can also make standard parametric approximations better calibrated. However, it is not true that our method is a foolproof nor automatic way to calibrate NNs. This is because even HMC itself is not foolproof—in particular, the prior hyperparameters and model misspecification nontrivially affect calibration quality.\n- **Non-Gaussian but still parametric approximation (e.g. VB with Laplace distributions)?** Heavier-tailed parametric approximations might be useful in some cases (e.g. when the basin of attraction locally also has heavy tails). However, a simple counterexample to this is already available in the text: Fig. 6 shows a true posterior that is skewed, so even with e.g. the Laplace distribution, one cannot obtain an accurate approximation. The refinement framework bypasses these prescribed, constrained approximations. I.e. by using refinement, one can flexibly fit complex posterior distributions—just like HMC—in a cost-efficient manner.", " Thanks for the review! We hope that our clarifications below can convince you more—please let us know if you have any additional questions!\n\n- **Clarify the narrative—it’s currently not very focused?** The story of the paper is as follows (see also **_Reviewer qker_** ’s review since they elegantly captured this)---we will clarify this in the next revision: \n - We observe that there are two parts contributing to the inaccuracy in the predictive distribution of a BNNs: (i) the weight-space approximation and (ii) the integration method to do Bayesian model averaging.\n - We show that, while the accuracy of the integration method is impactful (i.e. 
the number of samples in MC integration), it is less important than the quality of the posterior approximation itself (Table 1). I.e. higher-quality weight-space approximations yield samples that are more efficient—even small-sample MC integration can already yield good predictive distributions.\n - A natural question would be whether alternatives to MC integration, such as linearization, can yield faithful predictive distributions even with inaccurate posteriors. The answer seems generally to be negative, except in a special case where Gauss-Newton Hessian approximation is used.\n - It is thus instructive to use MC integration for general cases but under an accurate posterior approximation. However, accurate approximations such as HMC are very expensive. In this work, we provide a low-cost alternative.\n- **Lacking empirical evaluations?** \n - Regarding other than last-layer baselines for in-distribution experiments, please refer to Daxberger et al., _Laplace redux—effortless Bayesian deep learning_, NeurIPS 2021. We will make this clearer in the next revision.\n - Regarding accuracy, the main goal of the refinement method is to match HMC’s performance (which is considered the gold-standard baseline in literature). Notice that the accuracy of the base Laplace approximation itself is already on par with HMC—the refinement thus doesn’t gain any accuracy in this case. \n - As for comparison against specialized OOD baselines, it is less relevant for BNNs since it has been shown that standard BNNs underperform against them, see Kristiadi et al., _Being a bit frequentist improves Bayesian neural networks_, AISTATS 2022. Note that, our refinement method is compatible with theirs.\n- **When is refinement useful (Compare to temperature scaling)?** You are correct that our method is best suited for obtaining good-quality predictive distributions. Compared to specialized, ad-hoc calibration methods like temperature scaling, our method should perform better due to the fact that it is a principled Bayesian method—see Kristiadi et al., _Being Bayesian even just a bit, fixes overconfidence in ReLU networks_, ICML 2020 for comparisons between (unrefined) Bayesian methods and temperature scaling for calibration and mitigating overconfidence. \n- **Costs vs. last-layer’s size? Scale to ImageNet models?** Using a synthetic dataset to allow for easily controlling the experiment’s parameters (number of classes, number of features, number of data points), we found that the cost of refinement (or the radial flow in general, as implemented in Pyro) is not very sensitive w.r.t. `n_features` or `n_classes`, for standard values that are often used in practice. (We tested up to the values of 4096 and 1000 for the former and the latter, respectively. This takes ~1.2s per epoch for 10k data points.) Combining this observation with Fig. 7 (bottom), we can thus conclude that, in practice, the length of the flow affect the cost more than the problem dimensionality. Fortunately, as we show in Fig. 7 (top), short flows are already sufficient to obtain good performance.", " Thanks a lot for the review! We hope that our clarifications below can address your issues! If not, please do let us know in the discussion phase.\n\n- **Clarify the story?** Yes, you are correct that there are two things affecting the predictive quality, i.e. the quality of the weight-space approximation and the accuracy of the integration method. We will make this clearer in the next revision. 
Please refer also to our response to **_Reviewer 5mzJ_** for an additional perspective.\n- **Multiclass probit limitations? Make Fig. 5 more obvious?** The MPA only uses the diagonal elements of the covariance over the network outputs, among other approximations, which makes it underconfident. We have added the MPA’s derivation in Appendix A—which makes our point more formal— and made Fig. 5 clearer by changing the colormap. \n- **Refinement’s hyperparameter tuning?** We did not do extensive hyperparameter tuning for refinement: we use Adam with its default hyperparameters along with the cosine annealing for 20 epochs. We use the same setup for all problems.", " The paper studies empirically the current limitations of posterior estimation in Bayesian Neural Networks, focusing on Monte Carlo integrations as a simple golden standard and comparing its problems and limitations to the more recent techniques based on network linearization.\nThe authors then propose a post-hoc posterior refinement technique for posteriors to be estimated through MC integration and validate it empirically against the other analyzed techniques. Strengths\nOverall, the paper is well written. The problem is clearly stated and so are the main approaches to it.\nThe proposed method is simple to understand and looks computationally cheap enough given the advantages it brings, and section 3 has some compelling arguments on why current methods should not be trusted.\n\nWeaknesses\nSection 3.3 feels confusing. It doesn't a good job in convincing the reader that linearization-based alternatives are bad enough to disregard a priori. Figure 5 failed in convincing me that the qualitative estimate of MC or MPA was either better or worse than the competitor.\nEmpirically, the range of experiments is satisfying, however the fact that the tested models are only a Le-Net and a WideResNet leaves the reader wondering how much the results would change with different type of architectures, e.g. even a simple Gated Recurrent Unit on Emotions or Newsgroup-20 or a perceptron on the same proposed datasets. 1. Why Figure 5 should convince the reader that MPA is underconfident?\n\n2. Why figure 2.c should convince the reader that linearization-based approaches are being overconfident? Is there a ground truth I'm missing?\n\n3. Did you try different flows? What should convince the reader that this is a design choice not worth discussing or that it does not have a big impact on the final performance of the method?\n\n**********---------------------**********\n\nAfter the author's response I've increased my score to 6. Big picture limitations were appropriately addressed.\nHowever, I think the paper would greatly benefit from a more structured section or paragraph detailing the computational limitations, especially when compared to other baselines and related work. This doesn't need to be an empirical study of all methods computational costs, but just a clearer outline of the costs implied by each of the mentioned methods.", " First, the authors of this paper propose an experimental analysis of failure modes of a given way of building Bayesian Neural Networks (Bayesian NNs, BNNs), namely, MC integration over samples from an approximate posterior. 
\nThen, they propose a cheap and accurate method, which avoids these failure modes: replace the last layer of a NN by a variational layer, whose distribution is generated by a trained Normalizing Flow (which spans a much larger space of distributions than the commonly used product of Gaussians), then perform MC integration on top of that. They compare experimentally their method to the \"gold standard\" HMC.\n\n## Failure modes of MC methods\n\nThe authors recall (and prove experimentally) that MC methods do not lead to good predictive performances if the underlying approximation of the Bayesian posterior is too rough. For instance: Laplace Approximation (LA) and Variational Bayes (VB) ; notably, these approximations of the posterior assume Gaussian and independent parameters.\n\nOne of the pitfalls is also the lack of exploration of the parameter space: deep ensembles perform way better, even with very few samples.\n\n## Last-layer Bayesian NNs\n\nThe authors emphasize the idea of building \"Bayesian\" NNs only by replacing the usual last layer by a *Bayesian* last layer. Thus, the proposed methods relies heavily on \"Laplace redux–effortless Bayesian deep learning\" (Daxberger et al., 2021) [6]. Actually, the authors propose an improvement of [6]: instead of performing a simple LA on the last layer, the authors add Normalizing Flow (NF) to it, in order to increase the variety of reachable distributions. ## Strengths\n\nThe authors motivate their approach by an analysis of failure of common methods. So, the proposed method is justified, at least experimentally.\n\n## Weaknesses\n\nThe main weakness of this paper is its significance, compared to what is already known and used.\n\nFor instance, it is already known that sampling NNs from the same \"basin of attraction\" is not working very well compared to HMC and deep ensembles. So, the poor results of VB + MC or LA + MC in Section 3 are not surprising. For instance, see the exhaustive paper \"Bayesian deep learning and a probabilistic perspective of generalization\", Wilson and Izmailov, 2020.\n\nAbout the proposed new method, this is a variation of one of the methods proposed in [6], which is actually an extensive study of the LA in many setups, with practical consideration and available code. \n\n## Other\n\nThe experiments are limited to F-MNIST and CIFAR-10 and CIFAR-100. Since the results (tables 3, 4, 5) are clear and convincing on many aspects (calibration error, distance to the gold standard HMC, OOD detection), this should not be a problem. The phenomenon observed in [6] and by the authors of the current paper, i.e., the importance of the last layer when doing approximate Bayesian inference, is not understood:\n * According to the authors, which set of experiments could help us to understand it? \n * Is this a way for the NN to calibrate itself automatically, which would be impossible if the distribution of the final layer weights was constrained to Gaussians? \n * How would the NN react to other constrained distributions (Laplace, Weibull, etc.)? Is this related to the thickness of the tails?\n\nThe theoretical questions brought by the experiments done by the authors are a strength of the paper. These points should probably be discussed somewhere in the paper, with possibly some preliminary experiments. -", " The paper considers the problem of posterior approximation in Bayesian neural networks. 
First, the authors show that the accurate approximation with Monte Carlo requires a large number of samples, far exceeding the number of samples typically used by practitioners. Next, the authors show that (1) a large number of samples from a poor posterior approximation and (2) crude analytic approximations to the MC integral both lead to suboptimal performance in Bayesian neural networks. Finally, the authors suggest to use a variational refinement of the Laplace approximation with a normalizing flow to approximate the posterior in last layer BNNs, and show good performance. **Strength 1**. The authors show many interesting observations, e.g. Figure 3 suggesting that 100 samples is insufficient for an MC approximation to even a very simple model, and Figure 4 showing the discrepancy between the MC approximation and analytic approximation to the posterior predictive.\n\n**Strength 2**. The proposed approach is practical, as it is a cheap post-processing step to a pre-trained neural network. The empirical results are promising. \n\n**Strength 3**. It is nice that the authors included MMD distance estimates in Table 3 and showed that the refined posterior is closer to the HMC posterior. \n\n**Weakness 1**. The narrative of the paper is not very focused. In particular, it is not obvious, why the paper discusses the number of samples required for an MC approximation and the quality of analytic linearization-based approximations. The proposed method doesn't follow logically from these experiments, and the connection between these two parts of the paper is relatively weak.\n\n**Weakness 2**. The empirical evaluation is not very strong. In terms of in-distribution performance, the authors only compare the proposed method to last-layer baselines, and achieve an improvement in uncertainty / NLL, but not accuracy (according to appendix Table 5). The results on out-of-distribution detection (appendix Table 7) show improvement over full parameter-space Bayesian methods, but do not include specialized out-of-distribution detection methods.\n\nOverall, the paper shows interesting observations, and some promising results. However, the story feels disconnected, and the empirical observation doesn't prove that the proposed method will be useful for practitioners. **Question 1**. Where do you expect the proposed method to be most useful? It appears that the primary strength is improving the uncertainty calibration for deep neural networks. How does it compare to specialized calibration methods, e.g. temperature scaling? \n\n**Question 2**. How much is the proposed method affected by the size of the last layer, and the number of classes in particular? Is it cheap / computationally feasible to apply it to an ImageNet-scale model? No issues", " The paper addresses the problem of scalable Bayesian inference for large-scale neural networks. The core idea of the paper is to first do a (last layer) Laplace approximation of the posterior of the network weights and then refine it using normalizing flows. That is, the paper proposes to use a Laplace approximation as base distribution in normalizing flows instead of a fixed distribution, e.g. a standardized Gaussian. \n\nThe contribution of the paper is as follows:\n- a discussion of shortcomings of common strategies for computing predictive distributions (by averaging over the approximate posterior). 
Specifically, they contrast linearization and probit approximations with Monte Carlo integration.\n- propose to use a Laplace approximation as base distribution in a normalizing flows approximation\n- a set of thorough numerical experiments to support the claims.\n The paper proposes to combine two existing approximate inference techniques: Laplace approximations and normalizing flows. The idea is not revolutionary, but it is simple, effective, and original to the best of my knowledge. The paper is very well-structured, well-written and easy to follow. The claims of the paper are not justified theoretically, but they are supported by a set of high-quality numerical experiments.\n\n\nThere are a few things that could be slightly clarified:\n\n- In my perspective, the quality of an approximate posterior predictive distribution depends on two things: 1) the quality of the posterior distribution of the model and 2) the accuracy of the integration method used for averaging over the posterior distribution. Therefore, using high-fidelity integration methods (e.g. MC integration with S -> infinity) will not save you if the quality of the posterior approximation is sufficiently poor. I think this is in line with the message of the paper, but it appears a bit convoluted in the discussion of the effect of the MC sample size on the accuracy of the predictive distribution (in my opinion)\n- The difference between the top and bottom plot in Figure 5 could be emphasized visually and/or in the text. It took me a while to realize the difference.\n\n - Can the explanation described in lines 190-200 be further justified/substantiated? For example by inspecting/visualizing the pairs of approximate posterior and approximate posterior predictive in the two cases?\n- How much tuning was needed for optimizing the flows? Did you use the same optimization procedure/search parameters for all problems? Yes." ]
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 3, 4, 4 ]
[ "nips_2022_Bv8GV6d76Sy", "J9rTh-7RMPC", "keY60D4GAw6", "GHfRnt_jHE0", "dFQ6I689GQw", "xFkngTNg2od", "Ub_jHE6u0km", "nips_2022_Bv8GV6d76Sy", "nips_2022_Bv8GV6d76Sy", "nips_2022_Bv8GV6d76Sy", "nips_2022_Bv8GV6d76Sy" ]
nips_2022_VMU-hMsonit
Training and Inference on Any-Order Autoregressive Models the Right Way
Conditional inference on arbitrary subsets of variables is a core problem in probabilistic inference with important applications such as masked language modeling and image inpainting. In recent years, the family of Any-Order Autoregressive Models (AO-ARMs) -- closely related to popular models such as BERT and XLNet -- has shown breakthrough performance in arbitrary conditional tasks across a sweeping range of domains. But, in spite of their success, in this paper we identify significant improvements to be made to previous formulations of AO-ARMs. First, we show that AO-ARMs suffer from redundancy in their probabilistic model, i.e., they define the same distribution in multiple different ways. We alleviate this redundancy by training on a smaller set of univariate conditionals that still maintains support for efficient arbitrary conditional inference. Second, we upweight the training loss for univariate conditionals that are evaluated more frequently during inference. Our method leads to improved performance with no compromises on tractability, giving state-of-the-art likelihoods in arbitrary conditional modeling on text (Text8), image (CIFAR10, ImageNet32), and continuous tabular data domains.
Accept
This paper introduces an improved training method for auto-regressive generative models. Specifically, the paper identifies a redundancy problem in common auto-regressive models and proposes a way to fix this. The reviewers found the contribution significant and important, and it is likely that the paper will have substantial impact.
train
[ "k9Q4H1ooYvmd", "dqJfuYC_NNfm", "xqodtj0jrUx", "fbD28zy1_id", "fDoVj5SpY9qq", "UBIaDZhtJBX7", "hKJmF35lsE6", "pOWkb1XdMRe", "kbTPnIcqUR8", "40WTtbj6Z2G", "4V88ipLvGD2", "G-Tq2y9WKCV", "QpTB0GDiuD3", "uINKE7DIEJA", "beSdMKgjpHY" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the detailed explanation, which solves my main confusion. I am happy to increase my score to 6 to support the accept of the paper.", " > **In Figure 2 (b) or (c), there is no edge between (x2, x4) and x1, or (x1, x2, x4) and x3. Since both p(x1|x2, x4) and p(x3|x1,x2,x4) are never seen by the model in the training, how does AO-ARM estimate these two conditional probability?**\n\n\nFor the experiments in Section 4.3, where we do compute conditional probabilities, we trained on a different mask distribution (not shown in Figure 2). This mask distribution (call it Q) corresponds to both marginal and conditional queries, so *all the edges are seen by the model during training*. \nEverything else about the training procedure, architecture, inference, etc... remains the same. In other words, to train on Q, we sample queries (which can now be either marginals or conditionals) and apply the decomposition protocol to get a mask, and train on outgoing edges of that mask in parallel. Also, because Q is induced by the decomposition protocol, Q is different from the standard AO-ARM (uniform) distribution, so we do see improvements over the baselines.", " > **sample-dependent but fixed decomposition protocol**\n\nYes this is a good idea, to determine the ordering on the fly as we uncover pixels. We also agree that this should be possible, and perhaps can be learned as well.\n\n\n> **parallelism across positions is impossible without strong assumptions on the decomposition rule?**\n\nWe do think that techniques such as parallelism need to exploit the right task structures and loss functions, and this can be difficult for frameworks (e.g. GFlowNets) that are designed for generic combinatorial tasks that may not have this structure.", " I thank the authors for their response and explanations. However, I still have some confusion on the probability model introduced in the paper.\n\n> Alternatively, we can also evaluate $p(x1, x2, x3, x4)/p(x2,x4)$ as two marginal likelihood estimates, which comes with pros/cons as discussed above.\n\nAs discussed in the updated paper, this can lead to invalid conditional probability estimates where $p(x1, x3)/p(x2,x4) > 1$.\n\n> Then, we can evaluate $p(x1, x3|x2, x4) = p(x1|x2, x4)p(x3|x1,x2,x4)$, as we did for Table 5\n\nI'm still confused about this equation. In Figure 2 (b) or (c), there is no edge between (x2, x4) and x1, or (x1, x2, x4) and x3. Since both p(x1|x2, x4) and p(x3|x1,x2,x4) are never seen by the model in the training, how does AO-ARM estimate these two conditional probability?", " Thanks for the thorough response, clarifications and answers to my questions. I'm glad that you were able to include the new ablation study in the revision. It's too bad about the computational constraints restricting error measurements but I agree with your comment that your paper has provided good evidence that your results are significant.", " Thank you for the detailed response and the updated discussion. After having seen the changes and read all reviews and responses, I am increasing the rating from 5 to 6.\n\nA few thoughts, which are not necessarily points to be addressed in the paper:\n\n> We agree that some choices of canonical orderings may lead to better empirical performance. For example, we actually noticed that a strided ordering (0, 32, 64…, 1, 33, 65…) often did better than the lexicographical (0,1,2,3…) order for images. 
We had left this observation out of the paper since we thought it was out of scope of the main message, but we have now included it in the new Related Work section.\n\nThis is very interesting. I also wonder about **sample-dependent but fixed** decomposition protocols. For instance, on my first skimming of the paper, I had misread L179 to mean that the variable with the largest *value* is removed, e.g., first generating the dark background and then the foreground of an MNIST digit. This is of course not what was actually done, but it is a possible deterministic protocol that yields different orders of variable indices for different samples.\n\n> [A2] operates on sets, so the variables are not associated with a particular index, whereas AO-ARMs operate on a joint distribution with fixed indices. In other words, in [A2] the main operation is insertion “a cat” -> “a black cat”, whereas for MAC the main operation is unmasking “a [blank] cat” -> “a black cat”. As a result, the interpretation of marginal likelihood is also different in these two settings.\n\nThis is true, but one can imagine how to adapt the training procedure in [A2] to the setting considered in this paper (\"a [blank] cat\" -> \"a black cat\"). \nFor *stochastic* decomposition protocols, perhaps a connection can also be made with inverting noising processes, where the transition distribution has an absorbing state [blank], which appears in recent literature on diffusion in discrete spaces.\n\n> [A4] is also related, using GFlowNets and a reinforcement learning treatment with states, actions and rewards. However it appears that our method scales to larger domains (imagenet32, cifar, text8) versus their approach (MNIST). We suspect this may be because MAC has parallel training, can optimize using MLE instead of energy-based methods, or may be because GFlowNets relies on an RL framework.\n\nIndeed, few scalable algorithms explicitly produce a stochastic decomposition rule. MLE-like models, including the one in this work and those based on denoising, are efficient to train. Could the essential obstacle to combining the two be that parallelism across positions -- like in this paper's equation (7) or in the trick that turns inverting a diffusion SDE into a denoising objective -- is impossible without strong assumptions on the decomposition rule?", " Thank you all for the detailed reviews of our paper. We are happy to hear that the reviewers found the paper “clear and well-written”, with a “simple, intuitive” and “sound” approach and “strong results on a diverse set of problems”. We are also grateful for all the constructive feedback for improving the paper, and have updated the pdf with the new edits, highlighted in red.\n\nHere is a summary of the edits to the pdf:\n\n- A Related Work section discussing other non-autoregressive sequence modeling work, and possibilities of learning a decomposition protocol. [For Reviewer 9KgX]\n- A Limitation section discussing the limitation of the approximate nature of AO-ARMs and MAC model. In short, marginal/conditional estimates may be biased (w.r.t. joint likelihood) but are still valid. [For Reviewer 2pLm and fwZi]\n- Updating the ablation study to include one that just uses insight A. As expected it does slightly better than baseline and slightly worse than using both insights A & B. [For Reviewer HNCy]\n- Clarifying the decomposition protocol and some of the notation for indexing. 
[For Reviewer 9KgX and HNCy]\n", " > **A lack of error measurements in most of the experiments makes it hard to determine if your results are significant.**\n\nDue to computational limitations we were only able to run multiple seeds for the shared autonomy task. However, we believe that the ablation study and the range of different domains is good evidence that our results are significant and reproducible.\n\n&nbsp;\n\n> **The ablation study shows the effect of using the insights in the order of B->A but I wonder how well A only would do. Did you try to use a uniform distribution over the masks remaining after insight A (pictured in figure 2c)?**\n\nThanks for the suggestion. We had not tried this, but have now added the ablation. Please see the updated Figure 3 with the purple line, which uses only insight A. It gives slight improvements over the baseline, but slightly less than using both insights A and B.\n\n&nbsp;\n\n> **The idea that we want to weight masks more often used (which are the lower cardinality ones) and the cardinality reweighting seem contradictory so I'm not sure how much I believe your intuitive explanations of each idea.**\n\nThis is an interesting point of discussion, and one that we also do not fully understand. Here is our current hypothesis of why CR gives consistent improvements. \n\nThe ultimate goal is to perform well on masks in the test mask distribution. The machinery of MAC allows us to eliminate many edges, so that the test mask distribution has a smaller support, has lower entropy, and in general makes the learning problem easier. MAC then aligns the training mask distribution to the test mask distribution.\n\nHowever, the issue is that there are only $n$ masks of cardinality 1, and there are exponentially many masks of ‘medium’ cardinality. Even though masks of cardinality 1 should be sampled very often, there are so few of them that the model can learn them very quickly. Hence, more effort should be devoted to ‘medium’ cardinality masks, of which there are many and are harder for the model to learn.\n\nIn summary, it’s clear that we want to eliminate unused masks from our training distribution, and that we should roughly try to match the training and the test mask distribution. It’s less clear that we want to match the mask distribution perfectly, since repetitively sampling the super common masks can lead to ‘diminishing returns’. Again, we don’t claim to fully understand this, but this is our current hypothesis. Empirically at least, CR seems to help consistently, and it may be interesting to explore other reweighting strategies that might help even more.\n\n&nbsp;\n\n> **If I understand correctly, your algorithm box doesn't actually reflect how you train the model (in parallel) using the objective in eq 7**\n\nYes, the algorithm box reflects the training procedure without the parallelization, which we omitted for clarity. We can include it in the updated version if you wish.\n\n&nbsp;\n\n> **I found the notation confusing in some places. You use several different types of notation to refer to the same objects - you refer to the conditional distribution of interest as p(x_sigma(t)| x_sigma(<t)), p(x_i_t|x_e_t), p(x_i|x_e), p(x_j| x_e/j).**\n\nSorry for the confusion. We used different notations based on what was most convenient in the context. For example, we use $p(x_\\sigma(t)| x_\\sigma(<t))$ to describe prior work, since those operated on orderings $\\sigma$. 
For our work, we mostly reason over masks, so we use $p(x_i|x_e)$ or $p(x_j|x_{e \\setminus j})$, depending on whether we are going rightward or leftward on the binary lattice.\n\n&nbsp;\n\n> **I was particularly confused by referring to edges as (i, e) or (i_t, e_t). What is i in this case? I assume from context it's a variable not contained in the mask e due to the summation on line 207 but I don't think it was defined.**\n\nYes, in general we used $i \\in X \\setminus e$ to denote a variable not contained in the mask $e$, and we used $j \\in e$ to denote a variable contained in the mask $e$. We have clarified this in the updated version of the paper.\n\n&nbsp;\n\n> **I'm not sure what your \"sharpen mask distribution\" section is concluding. Is it just that your selected mask distribution has lower entropy (by design) than uniform? You're not showing that low entropy mask distributions actually help, except, possibly, in your earlier experiments.**\n\nWe wanted to use Table 7 to give intuition, in a concrete small-scale setting, on the difference between our distribution and the baseline distribution. In general we think that lower entropy mask distributions should help, but this is not the full story since CR increases entropy but also improves performance (as we discussed earlier).\n\n&nbsp;\n\n> **I think there is a small typo in the caption of Figure 1a: \"An autoregressive (model?) defines ...\"**\n\nThanks, we have fixed this.", " > **maybe limitations of the proposed work can be mentioned or discussed.**\n\nThank you for your review. We have added some discussion of limitations in the new Related Work and Limitations section. We discuss possibilities of learning the canonical ordering, and the limitations of the approximate nature of AO-ARM models in general.", " > **The definition of the decomposition protocol used is hidden in the text (L179) and not illustrated on real data. This makes it hard for a reader to check their understanding and gain intuition.**\n\nWe’ve highlighted this more in the updated version (end of Section 3).\n\n&nbsp;\n\n> **The use of a canonical ordering of data dimensions…. it is not clear that the standard ordering of pixels in the \"reading\" order is optimal for modeling… there is no natural ordering. \"largest\" is replaced by \"smallest\"?**\n\nWe did not explore much the choice of canonical ordering in this paper. Indeed, MAC will work on top of any choice of canonical ordering, since this simply amounts to relabeling nodes on the binary lattice. \n\nWe agree that some choices of canonical orderings may lead to better empirical performance. For example, we actually noticed that a strided ordering (0, 32, 64…, 1, 33, 65…) often did better than the lexicographical (0,1,2,3…) order for images. We had left this observation out of the paper since we thought it was out of scope of the main message, but we have now included it in the new Related Work section.\n\n&nbsp;\n\n> **In particular, how could the algorithm be combined with the learning of a (possibly probabilistic) decomposition protocol?**\n\nLearning the canonical ordering / decomposition protocol is an interesting avenue to pursue. As mentioned above, we empirically saw that strided ordering does better than the lexicographical ordering, so using a principled framework for learning the ordering / protocol could lead to promising improvements. We also agree that the citations provided (e.g. A2) would be suitable places to start.\n\n&nbsp;\n\n> **The discussion of related work could be improved. 
A few papers [A1, A2, A3, A4] that have addressed the question of non-autoregressive modeling have not been mentioned. What is the relationship of MAC with such approaches [A2, A4]?**\n\nThank you for the suggested references. We have cited these works and discussed their relation in the new Related Work section.\n\n[A2] operates on sets, so the variables are not associated with a particular index, whereas AO-ARMs operate on a joint distribution $(X_1, …, X_n)$ with fixed indices. In other words, in [A2] the main operation is insertion “a cat” -> “a black cat”, whereas for MAC the main operation is unmasking “a [blank] cat” -> “a black cat”. As a result, the interpretation of marginal likelihood is also different in these two settings.\n\n[A4] is also related, using GFlowNets and a reinforcement learning treatment with states, actions and rewards. However it appears that our method scales to larger domains (imagenet32, cifar, text8) versus their approach (MNIST). We suspect this may be because MAC has parallel training, can optimize using MLE instead of energy-based methods, or may be because GFlowNets relies on an RL framework.\n", " To clarify terminology, we refer to marginals as quantities such as $p(x2, x4)$, and conditionals as quantities such as $p(x1, x3 | x2, x4)$.\n\n&nbsp;\n\n> **Using Figure 2 (b) as an example, in the proposed method, the following equation might hold $p(x2,x3,x4)>p(x2,x4)$, which means $\\mathbb{E}_{\\sigma} p(x3|x2,x4)>1$. Based on this, I think the mathematical foundation in the proposed model is invalid.**\n\nThere are two points on this.\n\n- Indeed, evaluating conditionals as two marginals can be problematic, and this is a limitation of both standard AO-ARMs and MAC. Even in standard AO-ARMs, it could be the case that $p(x2,x3,x4)>p(x2,x4)$ (easy to construct an example if you wish). This is because marginal likelihoods are only evaluated using compatible orderings, not using all possible orders.\n\\\n\\\nIn fact, this means marginal and conditional estimates for both AO-ARM / MAC in general can be biased.\n\n- Despite the bias, we can still ensure that the conditional probability estimate is valid by evaluating the conditional directly (and not as two marginals). We can do this for MAC as well. Figure 2 showed a version of MAC that only optimized for marginal likelihoods. In Section 4.3 and Tables 4 & 5, we used a version of MAC that optimized for both marginal and conditional likelihoods by training on the corresponding mask distribution.\n\\\n\\\nTo give some intuition on conditional likelihood queries, these correspond to paths between two nodes on the lattice. For these, MAC also provides an important improvement over baseline AO-ARMs. MAC will focus training on a single path between two nodes, whereas AO-ARMs will uniformly sample paths between two nodes.\n\nWe’ve added these discussions to the Limitation section in the updated version.\n\n&nbsp;\n\n> **The paper might need to get more order samples to reduce the variance in likelihood estimation. Still, no such information or discussion is provided in the paper.**\n\nActually, it is the opposite. Our method essentially has no variance due to the deterministic ordering protocol, whereas standard AO-ARMs require more samples to reduce their variance (in practice, as you said, they just take a single sample). The discussion in Appendix A may also be relevant.\n\n&nbsp;\n\n> **Clearly, a biased order distribution can produce better perplexity, as it is trained more on a specific order. 
And therefore, in principle, it should not perform better than real AO-ARMs, but this is not reflected in the experimental results of Table 1, which caused my confusion.**\n\nOur method uses a biased order distribution, so it produces better perplexity, as you said. So, our method should perform better than standard AO-ARMs. This is reflected in Table1, where lower is better, and at 3000 epochs our method gives 1.40 and AO-ARM gives 1.48.\n\n&nbsp;\n\n> **My question is on the evaluation of marginal likelihood. Let's again take Figure 2 (b) as an example, I'm wondering how proposed method can compute the marginal likelihood of $p(x1,x2,x3,x4|x2,x4)$ if there is no edges between (2, 4) and (1, 2, 3, 4)?**\n\nWe write $p(x1,x2,x3,x4|x2,x4)$ as $p(x1,x3|x2,x4)$, using the notation in the paper. This is a conditional likelihood estimate, whereas the diagram in Figure 2b focused on marginal likelihood estimates. To handle a mix of both marginal and conditional likelihood estimates (e.g. Section 4.3) we use a training distribution that matches the corresponding mixed test mask distribution. Then, we can evaluate $p(x1,x3|x2,x4)=p(x1|x2,x4)p(x3|x1,x2,x4)$, as we did for Table 5. Alternatively, we can also evaluate $p(x1,x2,x3,x4) / p(x2, x4)$ as two marginal likelihood estimates, which comes with pros/cons as discussed above.\n", " The paper argued that one current limitation of Any-Order Autoregressive Models (AO-ARMs) is that they suffer from redundancy in their probabilistic model, i.e., they define the same distribution in multiple different ways. Especially, to my understanding, the authors mean that a sequence of length $n$ can be defined by $n!$ orders, which causes the model to under-fitting any-order data distribution as observed in XLNet or ARDMs. A particular order (i.e., $w$-MAC, Mask-tuned Arbitrary Conditional Model) is introduced to remove such redundancy by sampling the order from a non-uniform decomposition protocol $w$. During the inference (which only contains the likelihood estimation task in this paper), the order is also sampled from $w$-MAC, rather than uniform distribution, and thus achieves better likelihood estimation (somewhere between left-to-right autoregressive model and real AO-ARMs). Strengths:\n\n1. The main strength of the paper is that it finds a (maybe principled) way to tweak the order distribution in AO-ARMs to get a better likelihood number (which also might not be true, as I argued in the Weaknesses). This could potentially improve the quality of tasks that require sampling from an AO-ARMs, and in the meantime, can produce a valid probability given any observed and unobserved variables.\n\n2. A heuristic cardinality reweighting trick is introduced in Section 3.2 and explained in Section 5. This may inspire future research in this direction.\n\n3. Despite the high complexity, the model can still be trained in parallel.\n\nWeaknesses:\n\nI mainly have two concerns.\n\n1. Using Figure 2 (b) as an example, in the proposed method, the following equation might hold $p(x_2, x_3, x_4) > p(x_2, x_4)$, which means $\\mathbb{E}_{\\sigma} p(x_3 | x_2, x_4) > 1$. Based on this, I think the mathematical foundation in the proposed model is invalid.\n\n2. In the evaluation, all the previous work samples a random order $\\sigma$ from the uniform distribution because of the following assumption:\n 1. \"In other words, it does not matter which step $t + k$ the model predicts, in expectation, these all have the same associated likelihood.\" from [1]\n 2. 
\"We note that a model which perfectly captures the true densities would give consistent likelihoods for all possible orderings (thus evaluating only one ordering would suffice).\" AND Algorithm 1 from [2]\n\nHowever, in this work, $\\sigma$ is no longer from a uniform distribution. Therefore, in order to get a precise likelihood estimation, the paper might need to get more order samples to reduce the variance in likelihood estimation. Still, no such information or discussion is provided in the paper.\n\n[1] Autoregressive Diffusion Models\n[2] Arbitrary Conditional Distributions with Energy\n 1. Clearly, a biased order distribution can produce better perplexity, as it is trained more on a specific order. And therefore, in principle, it should not perform better than real AO-ARMs, but this is not reflected in the experimental results of Table 1, which caused my confusion. My question is on the evaluation of marginal likelihood. Let's again take Figure 2 (b) as an example, I'm wondering how proposed method can compute the marginal likelihood of $p(x_1, x_2, x_3, x_4 | x_2, x_4)$ if there is no edges between (2, 4) and (1, 2, 3, 4)? The work does not have an obvious potential negative societal impact.", " A method for training nonautoregressive generative models of high-dimensional (e.g., sequence) data is proposed. The main algorithm requires the choice of a decomposition protocol, which is a choice of generation order defined by a deterministic Markovian reverse generation process. The training objective is equivalent to a weighted log-likelihood of each generation step in the chosen order leading to a randomly masked training sample. Strong results are shown on a variety of domains. Strengths:\n- The proposed algorithm is simple and intuitive and the derivation is sound.\n- Strong results on a diverse set of problems, including language, vision, tabular data, and robot motion planning. Evaluations and comparisons are solid.\n\nWeaknesses: (I am willing to raise the score if the authors discuss the limitations and relationship with past work in their response and commit to adding them to the paper.) \n- The definition of the decomposition protocol used is hidden in the text (L179) and not illustrated on real data. This makes it hard for a reader to check their understanding and gain intuition. \n- The use of a canonical ordering of data dimensions seems like a limitation of the approach, but it is not discussed. For example, even in the image domain, it is not clear that the standard ordering of pixels in the \"reading\" order is optimal for modeling. In other domains, such as tabular data and other settings where incremental generation in arbitrary order is possible (e.g., generic fixed- but high-dimensional data or data of variable size, such as graphs, point clouds, etc.), there is no natural ordering. Please see the questions below.\n- The discussion of related work could be improved. Please see the second question below. - Would the results change if \"largest\" is replaced by \"smallest\" (or largest w.r.t. some random but fixed permutation of the dimensions) in the definition of the decomposition protocol (L179)? The definition relies upon a choice of natural ordering in the data space, so it may be important.\n - In particular, how could the algorithm be combined with the learning of a (possibly probabilistic) decomposition protocol?\n- Relation to past work.\n - A few papers that have addressed the question of non-autoregressive modeling have not been mentioned. 
For instance, [A1] is a relevant citation, while [A3] suggests an alternative approach that uses a continuous relaxation to learn generation order.\n - (Also related to the first question:) There are several other algorithms that feature a learned generation order, albeit in slightly different settings. In [A2], non-uniform generation order emerges implicitly from a greedy optimization procedure. In [A4], which uses a similar framing of chains on a lattice, the learned \"backward policy\" is a stochastic, data-dependent decomposition protocol. What is the relationship of MAC with such approaches?\n\n[A1] B.Uria et al., Neural autoregressive distribution estimation (JMLR, 2016). arXiv:1605.02226\n\n[A2] D.Emelianenko et al., Sequence modeling with unconstrained generation order (NeurIPS 2019). arXiv:1911.00176\n\n[A3] X.Li et al., Discovering non-monotonic autoregressive orderings with variational inference (ICLR 2021). arXiv:2110.15797\n\n[A4] D.Zhang et al., Generative flow networks for discrete probabilistic modeling (ICML 2022). arXiv:2202.01361 Please see the second point of Weaknesses above. ", " The paper improves Any-order ARO models by making them more efficient reducing redundancy in computations, while maintaining tractability and by better aligning the training and inference objectives.\n\nThe key idea is to reformulate the objectives in AO-ARO such that redundancies in computing the univariate conditionals can be fully exploited. Further, by tuning the training based on the distribution that represents the importance of univariate conditionals, the training is better suited to compute marginals during inference.\n\nExperiments are performed on several tasks and comparison studies show the improvements the proposed approach offers compared to state of the art 1. The paper is written very well clearly defining the problem, the background work and its own novel contribution.\n2. The work is meaningful and significant considering the generality of computing marginal probabilities (by integrating from high-dimensional data)\n3. The experiments seem to cover many areas, standard benchmarks from computer vision, language models, robotics, etc. The variety of experiments clearly shows the strengths of this work.\n\nWeakness\n\nWhile this may not be a weakness, maybe limitations of the proposed work can be mentioned or discussed.\n\nOverall, I thought the paper proposed a significant problem, clearly outlined a good solution and conducted an excellent empirical study.\n None They are not explicitly addressed. It would be nice to have a discussion on these in the paper.", " The authors present the Mask-tuned Arbitrary Conditional Models - a method for training Any-order Autoregressive Models.\nThey argue that the paper presents two insights hurting conventional AO-ARM training which they address with their method. \n\nThe authors investigate their method with an ablation study and present it on a variety of experiments. \n\n Strengths: \n\nThe explanations in the paper are well done and overall I found the paper clear and well-written. \n\nAblation study well isolates the components of your method and the contribution of each of them.\n\nExperiments are clear and well presented. A good variety of experiments were presented.\n\nThe authors state that it is not arbitrary joint distribution decomposition ordering that we really care about but, instead, arbitrary conditioning. 
They show that an ordering over every marginal can be specified, or a distribution over orderings, to address redundancy in training.\n\nThe authors show that tuning the mask distribution for training can lead to better results - an important idea that can be investigated further. Although the idea is straightforward, I view this as a strength of the method as it is a simple, effective modification to the standard training techniques that yields better results with, as the authors put it, no compromises.\n\nWeaknesses: \nA lack of error measurements in most of the experiments makes it hard to determine if your results are significant. \n\nThe idea that we want to weight masks more often used (which are the lower cardinality ones) and the cardinality reweighting seem contradictory, so I'm not sure how much I believe your intuitive explanations of each idea. The ablation study shows the effect of using the insights in the order of B->A but I wonder how well A only would do. Did you try to use a uniform distribution over the masks remaining after insight A (pictured in figure 2c)?\n\nIf I understand correctly, your algorithm box doesn't actually reflect how you train the model (in parallel) using the objective in eq 7. There is a lot of machinery presented (although interesting) to arrive at the conclusion that the MAC objective (eq 7) is the standard AO-ARM objective (eq 4) with a different mask sampling distribution.\n\nA subjective point: I found the notation confusing in some places. You use several different types of notation to refer to the same objects - you refer to the conditional distribution of interest as p(x_sigma(t)| x_sigma(<t)), p(x_i_t|x_e_t), p(x_i|x_e), p(x_j| x_e/j). I was particularly confused by referring to edges as (i, e) or (i_t, e_t). What is i in this case? I assume from context it's a variable not contained in the mask e due to the summation on line 207 but I don't think it was defined. \n\n I'm not sure what your \"sharpen mask distribution\" section is concluding. Is it just that your selected mask distribution has lower entropy (by design) than uniform? You're not showing that low entropy mask distributions actually help, except, possibly, in your earlier experiments. \n\nI think there is a small typo in the caption of Figure 1a: \"An autoregressive (model?) defines ...\"\n\n A small discussion on the mask distributional mismatch is included. Generally, limitations, or the lack thereof, are not discussed. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "dqJfuYC_NNfm", "fbD28zy1_id", "UBIaDZhtJBX7", "4V88ipLvGD2", "pOWkb1XdMRe", "40WTtbj6Z2G", "nips_2022_VMU-hMsonit", "beSdMKgjpHY", "uINKE7DIEJA", "QpTB0GDiuD3", "G-Tq2y9WKCV", "nips_2022_VMU-hMsonit", "nips_2022_VMU-hMsonit", "nips_2022_VMU-hMsonit", "nips_2022_VMU-hMsonit" ]
nips_2022_nax3ATLrovW
Versatile Multi-stage Graph Neural Network for Circuit Representation
Due to the rapid growth in the scale of circuits and the desire for knowledge transfer from old designs to new ones, deep learning technologies have been widely exploited in Electronic Design Automation (EDA) to assist circuit design. In chip design cycles, we might encounter heterogeneous and diverse information sources, including the two most informative ones: the netlist and the design layout. However, handling each information source independently is sub-optimal. In this paper, we propose a novel way to integrate the multiple information sources under a unified heterogeneous graph named Circuit Graph, where topological and geometrical information is well integrated. Then, we propose Circuit GNN to fully utilize the features of vertices, edges as well as heterogeneous information during the message passing process. It is the first attempt to design a versatile circuit representation that is compatible across multiple EDA tasks and stages. Experiments on the two most representative prediction tasks in EDA show that our solution reaches state-of-the-art performance in both logic synthesis and global placement chip design stages. Besides, it achieves a 10x speed-up on congestion prediction compared to the state-of-the-art model.
Accept
This paper proposes a GNN approach to EDA using the construction of a circuit graph that combines geometric and topological information, as well as features generated from physical properties of circuit components. While reviewers have raised certain concerns (some addressed already in rebuttal), they all settled (post rebuttal) on recommending weak accept of the paper. I agree with them and think the NeurIPS audience would benefit from the inclusion of this work in the program, and therefore I recommend acceptance. I would like to encourage the authors to take into account the comments and discussion with the reviewers, as well as incorporate materials presented in their responses, when preparing the camera ready version.
train
[ "jKEVS6PRE5R", "2N6eK2vaYBe", "D2Wg35JMDtL", "7vp-m2CeXa", "zT9YofH_Vp", "aviiJRLS0tO", "-PATnDXsN_md", "KyH5KMK_K4j", "TO75D-31Eqo", "KvnG-JaMxs6", "aM-7tHpCXNk", "h18LIFlJ8vd", "6AY9psURFST", "s-NDb9krlhC", "fxG-3WqEPBB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response and additional results.\n\nRegarding to the support of EDA stages, this is the sentence copied from the Introduction section \"To our best knowledge, this is the first unified circuit representation approach that can be easily compatible across EDA tasks and stages.\", which gives people an impression that the proposed approach works for all EDA stages. I would encourage authors to rephrase this sentence in the next revision to avoid misleading.\n\nOverall, most of my major concerns are addressed after rebuttal, though a few minor concerns still exist (e.g., runtime speedup is too marginal for the wirelength prediction task in placement stage). Considering the contribution of this work to both GNN and EDA communities, I raise my score to 6. Authors are strongly encouraged to include their rebuttal responses (e.g., all the additional experiments such as MPNN and clarifications such as compatibility across EDA stages) in the final version.", " ## For generating both features and labels with DREAMPlace\n\nDREAMPlace is an open platform which collects lots of EDA tools e.g. RUDY, NCTUgr. Many AI4EDA works[1] [2] [3] use these tools to generate raw features and labels to construct machine learning tasks, where the DL models are evaluated. It's also common in other machine learning fields besides of AI4EDA. Take molecular machine learning as example. The researchers usually use RDKit(https://rdkit.org/), a open-source cheminformatics software to generate **atom/bond raw features as well as targets (e.g. conformation and energy)** [4] [5] [6], while none of them argue that the experiments are meaningless if these data come from the same toolkit.\nTherefore, it is universally acknowledged to conduct the experiments with features and labels from non-AI tools and **check how efficiently and effectively DL models can learn from raw features and handle the tasks**.\n\n## For other stages like data-flow\nWe claim in the paper that our method is compatible across multiple EDA tasks and stages. This claim is backed by the experiment results of two EDA tasks (congestion prediction and net wirelength prediction) at two EDA stages (logic synthesis and placement). The input of our model is circuit netlist, which is a collection of electronic components connected by physical wires. Netlist is only available after technology mapping in logic synthesis. However, there are other circuit representations in early EDA stages like data-flow graph in high-level synthesis or And-Inverter Graph (AIG) in early logic synthesis stage. Our method might not apply to these graphs as their nodes and edges have different meaning. To make the claim more accurate, we update the limitation section of the paper with following: \n\"In early EDA stages, other circuit representations like dataflow graph or And-Inverter Graph (AIG) graph might be used. As the meaning of nodes and edges in these graphs are different from netlist graph used in the paper, our method might not apply to these graphs used in early EDA stages.\"\n\n[1] Ghose et al. Generalizable Cross-Graph Embedding for GNN-based Congestion Prediction. In ICCAD 2021\n\n[2] Wang et al. LHNN: Lattice Hypergraph Neural Network for VLSI Congestion Prediction. In DAC 2022\n\n[3] Xie et al. Net2: A Graph Attention Network Method Customized for Pre-Placement Net Length Estimation. In ASPDAC 2021\n\n[4] Xiong et al. Pushing the Boundaries of Molecular Representation for Drug Discovery with the Graph Attention Mechanism. 
In Med Chem 2020 Aug 27;63(16)\n\n[5] Li et al. Conformation-Guided Molecular Representation with Hamiltonian Neural Networks. In ICLR 2021\n\n[6] Yang et al. Deep Molecular Representation Learning via Fusing Physical and Chemical Information. In NeurIPS 2021", " ## For MPNN\n\nThank you for your suggestion. We will include our discussion of the advantages of our proposed method over other GNN baselines that incorporate edge features (MPNN) in an improved future version of the paper.\n\n## For results in Table 10\n\nWe are sorry for this mistake. The original version of Table 10 only includes the training time of the baselines. We have submitted an updated version and included the inference time comparison results we mentioned between ours and DREAMPlace on *superblue19* (1.6s vs. 285s for the logic synthesis phase, and **3.51s vs. 3.63s in the placement stage**) in Appendix F.\n\n## For the runtime of DREAMPlace in net length prediction\n\nSince the reviewer explicitly asked for the runtime comparison of DREAMPlace [1], we believe it is necessary to fully explain the working mechanism of DREAMPlace in the global placement stage and its relation to our proposed Circuit GNN predictor. DREAMPlace is a convenient open platform placer to solve region-constrained global placement problems. It proposes a neural network-based analytical solution to solve the global placement problems efficiently. To provide the feedback signal of the quality of the current design layout, there are several metrics that are important to consider, including wirelength, congestion, etc. DREAMPlace has integrated well-known global routers such as NCTUgr [2] and Rudy [3] to provide an estimation of the quality of the current design layout after each training step in their analytical solution. \n\nHowever, existing accurate congestion estimators for a given layout, such as NCTUgr [2], are expensive, which makes the feedback signal generation process become the efficiency bottleneck of the global placement stage. This aspect provides the motivation for the following learning-based algorithms [3] [4] [5] to learn a proxy function and provide a relatively accurate estimation of routing congestion values with a much faster runtime. Our work follows the same motivation.\n\nIn our paper, we did not claim our method is able to replace the DREAMPlace framework since our proposed method is only able to predict the metrics of interest such as congestion and wirelength. It is unable to replace a placer tool such as DREAMPlace. For the global placement stage, the runtime improvement we claim over DREAMPlace comes from using the proxy predictor over the classic congestion estimation tool when we are generating the quality feedback signal for each design layout. In the global placement stage, on the largest *superblue19* circuit, our inference time for generating the congestion label is around 4.09s and 3.51s for the wirelength label (in total 7.6 s for each iteration step) while using the conventional NCTUgr and HPWL to estimate the ground truth label will take 55.2s and 3.63 s respectively (in total 58.8 s for each iteration step), which is 7.7 times slower than our proposed Circuit GNN per each iteration step.\n\nIndeed, on merely the wirelength prediction task, our runtime on *superblue19* is only marginally improved over HPWL.
However, the wirelength prediction is only a support task in the placement stage since the way we are representing the netlist and layout geometry information in our proposed Circuit Graph contains the **Nets level** representation. Thus, with a simple pooling layer and a decoder, we can easily obtain the wirelength prediction label. The wirelength predictor we proposed is not where the main source of runtime speedup comes from. Its effectiveness is more convincing in the logic synthesis stage as we do not even need to rely on the post placement layout information to provide a rough estimation in the earlier design stage.\n\n[1] Jiaqi Gu et al. DREAMPlace 3.0: Multi-Electrostatics Based Robust VLSI Placement with Region Constraints. In ICCAD 2020.\n\n[2] K. Dai, W. Liu, and Y. Li. NCTU-GR: Efficient simulated evolution-based rerouting and congestion-relaxed layer assignment on 3-d global routing. IEEE TVLSI, vol. 20, no. 3, pp. 459-472, 2012.\n\n[3] P. Spindler and F. M. Johannes. Fast and accurate routing demand estimation for efficient routability-driven placement. in Design, Automation & Test in Europe Conference & Exhibition, 2007. \n\n[4] R. Kirby, S. Godil, R. Roy, and B.\nCatanzaro. Congestionnet: Routing congestion prediction using deep graph neural networks. in VLSI 2019.\n\n[5] Ghose et al. Generalizable Cross-Graph Embedding for GNN-based Congestion Prediction. In ICCAD 2021.\n\n[6] Wang et al. LHNN: Lattice Hypergraph Neural Network for VLSI Congestion Prediction. In DAC 2022.\n", " Thanks for all reviewers' constructive suggestions, which help us find some points we didn’t explain clearly. Now we have got them clarified in a new version. The revision mainly includes:\n- A new baseline (MPNN) is added into Congestion Prediction (see **Table 1**), and the explanation of why our message function is better than MPNN in **Section 5.3**.\n- More details of baselines (e.g. # of parameters and inference time, see **Table 16-19** in **Appendix F**), and the inference time of each step of DREAMPlace (see statement in **Appendix F**).\n- A transfer learning task to further evaluate the representativeness of extracted GNN features of SOTA LHNN and our model (see **Appendix G**).\n- Model Sensitivity Experiments (see **Appendix D**), containing:\n\t1. The justification of topological and geometrical message function choices;\n\t2. The justification of using max-pooling to fuse topological and geometrical information;\n\t3. The justification of concatenating raw features in readout representation;\n\t4. The justification of designing geom-edges instead of directly encoding cell positions.\n- Update **Limitation** in **Section 6**.", " Thank you for your detailed responses. My remaining concerns are listed below:\n\n**MPNN**\n\nI would encourage authors to include MPNN results in the next revision and highlight the advantage of the proposed model over MPNN, which would make the paper stronger.\n\n**DreamPlace comparison**\n\nI could not find the results that authors mentioned (1.6s vs. 285s) in Table 10. More importantly, I was asking the runtime comparison between DreamPlace and the proposed approach **in the placement stage**, rather than the logic synthesis stage. If there are no such results to show the runtime speedup, it is unclear about the motivation why we need to apply ML-based methods for net wirelength prediction in the placement stage.", " Thank you for your detailed responses. 
My remaining concerns are listed below:\n\n**Follow-up on the first question:**\n\nSince DreamPlace is used to produce ground truth, why do we even need to generate node features by DreamPlace and then apply GNN? In other words, we already know the ground truth once we obtain node features by DreamPlace, so what's the point of using GNN to predict the ground truth again?\n\n**Follow-up on the third question:**\n\nIf the proposed approach cannot work on data-flow graphs in the high-level synthesis stage, authors should clearly mention this limitation in the paper, instead of claiming that the proposed approach is compatible across EDA stages.\n", " We appreciate your constructive review and valuable questions. Here are some points we would like to clarify:\n\n## About your questions 1-2\n\n1. Our motivation is to propose a versatile model to handle diverse EDA tasks on multi-stage circuits without repeatedly designing individual models:\n 1. In Circuit Graph, we use topo-edges and geom-edges to capture the topology and geometry, which determine most of the properties of a circuit. \n 2. In Circuit GNN, to capture the deep topological and geometrical relationships and generate informative representations for diverse EDA tasks, we pass the messages through topo-edges and geom-edges and fuse them at the end of each layer. \n \n The experimental results in Sec. 5 show that our model can generate informative representations with potential transferability to handle diverse EDA tasks.\n \n It is a good idea to further evaluate the representativeness of extracted GNN features by conducting a transfer experiment. Following your suggestion, we train Circuit GNN/LHNN with Congestion Prediction task and evaluate/fine-tune them with Net Length Prediction:\n \n Some experiment settings: \n \n - For the **evaluation setting**, a different readout module is trained from scratch, but GNN’s parameters are fixed.\n - For the **fine-tuning setting**, a different readout module is trained from scratch, and GNN’s parameters are fine-tuned at the same time.\n - For **evaluation & fine-tuning**, LHNN and Ours are only trained for 1/5 epochs of default setting, to show the transferability of learned features.\n \n | | time | pearson | spearman | kendall |\n | --- | --- | --- | --- | --- |\n | MLP | 2.22 | 0.493 | 0.547 | 0.415 |\n | LHNN (evaluate) | 192.45 | 0.689 | 0.715 | 0.563 |\n | Ours (evaluate) | 9.55 | **0.799** | **0.811** | **0.622** |\n | LHNN (fine-tune) | 248.96 | 0.805 | 0.794 | 0.612 |\n | Ours (fine-tune) | 14.80 | **0.842** | **0.829** | **0.639** |\n | LHNN | 260.00 | 0.801 | 0.796 | 0.603 |\n | Ours | 14.79 | 0.848 | 0.835 | 0.646 |\n \n The results show that the knowledge Circuit GNN learns from Congestion Prediction can be easily transferred to another task (no matter for direct use or fine-tuning), while LHNN is weak in transferability.\n \n2. From the traditional EDA routing tools, NCTUGR [2] is one of the most commonly used tools for estimating the congestion values given the circuit layout. To compute the congestion label for the testing circuit we used (*superblue19*), the run time is about 50s, which is considerably much higher than the inference time for our proposed circuit GNN (4.1s) as well as other learning based method in the literature. Besides, the inference time of Circuit GNN on *superblue19*[1] (with 496045 cells, 515951 nets, and 1912420 pins) is 4.09s for Congestion Prediction and 3.51s for Net Length Prediction. 
It is around an order of magnitude faster than SOTA LHNN (65.21s and 21.41s).\n", " We greatly appreciate your careful and detailed review. Here are some points we would like to clarify:\n\n## Justification for our Technical Significance\n\nThe main technical significance of our solution lies in VERSATILITY and EFFICIENCY, which encourage us to find a **simple, general, compatible but effective** solution for netlist representation in the EDA field. For the proposed solution to have practical and commercial value, the learning-based solution must achieve a significantly faster inference time than the traditional EDA routing tools, as well as a reasonable training time. For example, on the largest circuit we used in our paper (*superblue19*), estimating the congestion for each layout using the conventional tool NCTUGR [8] will take around 51 s. A learning-based method should have a significantly faster inference time (4s) to make a convincing case for replacing the conventional EDA tools. This particular requirement prevents us from designing complicated and inefficient components in our model. For example, the computationally more demanding metapath-based heterogeneous graph neural network method [9] suggested by the reviewer might be less favorable for this reason. \n\nThe Circuit Graph and Circuit GNN are designed to guarantee VERSATILITY and EFFICIENCY, as we state below:\n\n### Justification for our netlist graph construction (Circuit Graph)\n\n**FOR VERSATILITY.** We use topo-edges and geom-edges to capture the topology and geometry, which determine most of the properties of a circuit. For example, circuit congestion is mainly caused by crowdedly-placed cells (geometry) and dense nets (topology), while net length is determined by cells’ connectivity via nets (topology) and the distances among connected cells (geometry). Therefore, our Circuit Graphs can be used as inputs for diverse EDA prediction tasks.\n\nTo **generalize** the geometrical information among circuits and tasks, we use cell-pair distances as the raw features of geom-edges, which **are invariant to translation and rotation** [3]. Besides, the distances can cover most of the geometrical information the models need to handle the tasks (e.g. the crowdedly-placed cells and distances among connected cells mentioned above).\n\nFor circuits in the logic synthesis stage, the Circuit Graph can **compatibly** process them by masking the geom-edges, where most of the topological information can still be preserved and used to handle various tasks.\n\n**FOR EFFICIENCY.** We accelerate the construction of geom-edges with shift-windows. Even a VLSI design with millions of cells and nets can be converted to a Circuit Graph with a time cost linear in its scale (see Appendix B.2).\n\n### Justification for our GNN model (Circuit GNN)\n\n**FOR VERSATILITY.** To capture the deep topological and geometrical relationships and generate informative representations for diverse EDA tasks, we not only pass the messages through topo-edges and geom-edges, but also **FUSE** them at the end of each layer.\n\nFor input Circuit Graphs with no geom-edges (logic synthesis stage), the Circuit GNN can compatibly process them by masking the geometrical message-passing, where most of the topological information can still be preserved in output representations to serve downstream tasks.\n\n**FOR EFFICIENCY.** We use simple but effective message functions and a fusion strategy to collect the topological and geometrical information.
See more details in **Discussion of Inference Time** at the end of Sec. 4.2.", " ## Experiments\n\n1. **MPNN.** Beyond traditional GNNs (GCN et al.), we also compared our model against various advanced GNNs specially designed for EDA problems, e.g. CongestionNet, Net^2^ and LHNN. We considered comparing it with MPNN, but MPNN is originally designed to handle small molecules rather than VLSI. The molecular graphs are small in scale but embed rich semantics in bonds (edges), so it is reasonable for MPNN to use a time-expensive edge message function ($O(H_v^2H_e)$, slower than our $O(H_vH_e)$). However, MPNN seems to be costly and underused when applied to large circuits with millions of nodes and edges. Here are some results: (Note that the hidden dimensions are the same across baselines. **N** refers to Node-level and **G** refers to Grid-level)\n \n \n | | time | pearson (N) | spearman (N) | kendall (N) | pearson (G) | spearman (G) | kendall (G) |\n | --- | --- | --- | --- | --- | --- | --- | --- |\n | GCN | 9.43 | 0.777 | 0.265 | 0.199 | 0.221 | 0.366 | 0.260 |\n | Ours (w/o. geom.) (MPNN conv.) | 116.24 | **0.780** | **0.289** | **0.217** | 0.292 | 0.458 | 0.319 |\n | Ours (w/o. geom.) | 21.62 | 0.779 | **0.289** | **0.217** | **0.315** | **0.468** | **0.329** |\n\n1. **Hyperparameters.** We tried to tune the hyperparameters in Net^2^ and LHNN (e.g. the hidden dimension, layer number, and edge sample number), but there has no overall improvement, so we kept the default settings. For example, the influence of the hidden dimension of LHNN shows below:\n \n \n | hidden dim. | time | pearson (G) | spearman (G) | kendall (G) |\n | --- | --- | --- | --- | --- |\n | 16 | 205.05 | 0.703 | 0.698 | **0.545** |\n | 32 (default) | 305.47 | 0.700 | **0.701** | 0.540 |\n | 64 | 340.42 | **0.707** | 0.696 | 0.541 |\n2. **DREAMPlace Comparison.** The inference time of our model is 178x faster than DREAMPlace (1.6s vs. 285s) when predicting net length for circuit **superblue19** in logic synthesis stage (see Table 10), as DREAMPlace should do placement first before adopting HPWL to calculate net length.\n \n Besides, we use DREAMPlace, a popular non-AI toolkit, to generate the labels with HPWL to construct Net Length Prediction task, similar to the experiment settings in [7]. The key point of conducting this experiment is to show that the representation learned by our model can be efficiently and effectively applied to multiple circuit stages and EDA tasks. In contrast, the existing DL models are not as versatile.", " ## About Your Questions\n\n1. DREAMPlace is a publicly available and easily setup toolkit for researchers studying VLSI placement problem. In our paper, for all the tasks (congestion prediction and wirelength prediction), we use DREAMPlace to help generate features and ground truth labels to construct learning based EDA tasks, where deep learning methods are expected to learn the good mapping function from the input features to the ground truth labels using the generated data. This setting is following the experiment settings in [1]. \n2. On one hand, there are usually more geom-edges than topo-edges in Circuit Graph (3.4M geom-edges and 1.9M topo-edges in *superblue19*[2]), so for geom-edges we prefer edge-weight summation rather than inner product, which is $F_{\\cal U}$ (hidden dimension of net) times more expansive in computation. On the other hand, it is also reasonable for geom-edges to use edge-weight summation because geometrically closer cells have a stronger relationship. 
Still, we test the performance when topo-edges use edge-weight summation or geom-edges use the inner product:\n \n \n | | time | pearson (N) | spearman (N) | kendall (N) | pearson (G) | spearman (G) | kendall (G) |\n | --- | --- | --- | --- | --- | --- | --- | --- |\n | topo w. edge-weight summation | 25.35 | 0.886 | 0.707 | 0.570 | 0.694 | 0.743 | 0.552 |\n | geom w. inner product | 37.71 | 0.886 | **0.717** | **0.579** | 0.689 | 0.734 | 0.542 |\n | Ours | 27.07 | **0.887** | 0.714 | 0.575 | **0.697** | **0.770** | **0.577** |\n \n We can observe that:\n \n - If we use edge-weight summation to process topo-edges, time is marginally saved but performance gets worse.\n - If we use the inner product to process geom-edges, time cost will increase with no significant improvement.\n3. Apart from logic synthesis and placement, the proposed work can be potentially used to predict timing in routing stage or Clock Tree Synthesis (CTS) outcomes like clock power, max skews, etc. Some prior works have attempted to use machine learning-based methods for these tasks [4][5][6]. However, in data flow graphs in high level synthesis, the nodes represent computation functions and the directed edges represent data path, which are different from the meaning of nodes and edges in a circuit netlist. The method proposed in our work might not apply to data flow graph.\n4. Directly encoding the cell positions as features leads to very bad generalization because raw 3D positions do not satisfy translation and rotation invariances [3]. Here are some experiments: (Ours (w/o. geom.) (pos. encode) is the modification we made which encodes the cell positions into node features instead.)\n \n \n | | time | pearson (N) | spearman (N) | kendall (N) | pearson (G) | spearman (G) | kendall (G) |\n | --- | --- | --- | --- | --- | --- | --- | --- |\n | GAT | 13.90 | 0.777 | 0.267 | 0.200 | 0.215 | 0.399 | 0.280 |\n | GAT (pos. encode) | 16.21 | 0.777 | 0.263 | 0.197 | 0.210 | 0.397 | 0.279 |\n | Ours (w/o. geom.) | 21.62 | 0.779 | 0.289 | 0.217 | 0.315 | 0.468 | 0.329 |\n | Ours (w/o. geom.) (pos. encode) | 22.55 | 0.766 | 0.328 | 0.292 | 0.228 | 0.475 | 0.411 |\n | Ours | 27.07 | **0.887** | **0.714** | **0.575** | **0.697** | **0.770** | **0.577** |\n \n We can see that directly encoding the cell positions cannot improve overall performance, while geom-edges do.\n \n\n## Acknowledgment\n\nThank you again for your constructive suggestions, which help us find some points we didn’t explain clearly. We will get them clarified in our later version.\n\n[1] Ghose et al. Generalizable Cross-Graph Embedding for GNN-based Congestion Prediction. In ICCAD 2021\n\n[2] Bustany et al. ISPD 2015 Benchmarks with Fence Regions and Routing Blockages for Detailed-Routing-Driven Placement.\n\n[3] Yang et al. Deep Molecular Representation Learning via Fusing Physical and Chemical Information. In NeurIPS 2021\n\n[4] Barboza et al. Machine Learning-Based Pre-Routing Timing Prediction with Reduced Pessimism. In DAC 2019.\n\n[5] Yang et al. Pre-Routing Path Delay Estimation Based on Transformer and Residual Framework. In ASP-DAC 2022.\n\n[6] Lu et al. GAN-CTS: A Generative Adversarial Framework for Clock Tree Prediction and Optimization. In ICCAD 2019.\n\n[7] Xie et al. Net^2^: A Graph Attention Network Method Customized for Pre-Placement Net Length Estimation. In ASPDAC 2021\n\n[8] Dai et al. NCTU-GR: Efficient Simulated Evolution-Based Rerouting and Congestion-Relaxed Layer Assignment on 3-D Global Routing. In TVLSI 2010\n\n[9] Zhang et al. 
Heterogeneous graph neural network. In KDD 2019", " ## About your questions 3-5\n\n3. On the one hand, there are usually more geom-edges than topo-edges in Circuit Graph (3.4M geom-edges and 1.9M topo-edges in *superblue19*[2]), so we use edge-weight summation rather than inner product, which is $F_{\\cal U}$ (hidden dimension of net) times more expansive in computation. On the other hand, it is also reasonable for geom-edges to use edge-weight summation because geometrically closer cells have a stronger relationship. Still, we test the performance when topo-edges use edge-weight summation or geom-edges use inner product: (**N** refers to Node-level and **G** refers to Grid-level)\n \n \n | | time | pearson (N) | spearman (N) | kendall (N) | pearson (G) | spearman (G) | kendall (G) |\n | --- | --- | --- | --- | --- | --- | --- | --- |\n | topo w. edge-weight summation | 25.35 | 0.886 | 0.707 | 0.570 | 0.694 | 0.743 | 0.552 |\n | geom w. inner product | 37.71 | 0.886 | **0.717** | **0.579** | 0.689 | 0.734 | 0.542 |\n | Ours | 27.07 | **0.887** | 0.714 | 0.575 | **0.697** | **0.770** | **0.577** |\n \n We can see that:\n \n - If we use edge-weight summation to process topo-edges, time is marginally saved but performance gets a bit worse.\n - If we use the inner product to process geom-edges, time cost will increase with no significant improvement.\n4. We hope to keep most of the informative values when fusing the topological and geometrical information, while sum-pooling and mean-pooling may revise them. Concatenation is not considered because we hope to keep the same hidden dimension in each layer. The results below show that using sum-pooling and mean-pooling has worse spearman (G) & kendall (G) and only marginal improvement in other metrics: (**N** refers to Node-level and **G** refers to Grid-level)\n \n \n | | time | pearson (N) | spearman (N) | kendall (N) | pearson (G) | spearman (G) | kendall (G) |\n | --- | --- | --- | --- | --- | --- | --- | --- |\n | Ours (sum pool) | 29.00 | 0.887 | **0.717** | **0.580** | **0.699** | 0.756 | 0.564 |\n | Ours (mean pool) | 29.38 | **0.888** | 0.715 | 0.577 | 0.697 | 0.755 | 0.563 |\n | Ours | 27.07 | 0.887 | 0.714 | 0.575 | 0.697 | **0.770** | **0.577** |\n5. We concatenate the raw features to enrich the representations. The results below show that excluding raw features only causes a marginal performance drop: (**N** refers to Node-level and **G** refers to Grid-level)\n \n \n | | time | pearson (N) | spearman (N) | kendall (N) | pearson (G) | spearman (G) | kendall (G) |\n | --- | --- | --- | --- | --- | --- | --- | --- |\n | Ours (w/o. raw feat.) | 27.45 | **0.892** | 0.713 | 0.574 | **0.697** | 0.759 | 0.567 |\n | Ours | 27.07 | 0.887 | **0.714** | **0.575** | **0.697** | **0.770** | **0.577** |\n\n[1] Bustany et al. ISPD 2015 Benchmarks with Fence Regions and Routing Blockages for Detailed-Routing-Driven Placement.\n\n[2] Dai et al. NCTU-GR: Efficient Simulated Evolution-Based Rerouting and Congestion-Relaxed Layer Assignment on 3-D Global Routing. In TVLSI 2010\n", " Thank you for your inspiring comments. Here are some points we would like to clarify:\n\n## About the weakness you mentioned\n\n1. The necessity of topology and geometry’s cooperation is demonstrated both methodologically and experimentally:\n 1. Methodologically, topology and geometry jointly decide most circuit properties e.g. congestion and net length mentioned in this paper. Circuit congestion is mainly caused by crowdedly-placed cells (geometry) and dense nets (topology). 
Net length is determined by cells’ connectivity via nets (topology) and the distances among connected cells (geometry).\n 2. Experimentally, take Congestion Prediction as an example: (**N** refers to Node-level and **G** refers to Grid-level)\n \n \n | | time | pearson (N) | spearman (N) | kendall (N) | pearson (G) | spearman (G) | kendall (G) |\n | --- | --- | --- | --- | --- | --- | --- | --- |\n | Ours (w/o. geom.) | 21.62 | 0.779 | 0.289 | 0.217 | 0.315 | 0.468 | 0.329 |\n | Ours (w/o. topo.) | 21.54 | 0.883 | 0.713 | 0.573 | 0.684 | 0.730 | 0.536 |\n | Ours | 27.07 | **0.887** | **0.714** | **0.575** | **0.697** | **0.770** | **0.577** |\n \n Although Congestion Prediction in the placement stage result mainly depends on the geometry, integrating both topology and geometry can yield even better performance.\n \n \n Therefore, a joint model is needed to capture deep interaction between topology and geometry and generate informative representations for downstream tasks.\n \n\n## About your q**uestions:**\n\n1. Some details of our model and baselines are listed below:\n\nCongestion Prediction \n\n| Model | # parameter | Inference Time (s) | Topology-aware | Geometry-aware | Compatiblity to logic/placement |\n| --- | --- | --- | --- | --- | --- |\n| GCN | 205K | 2.74 | yes | no | logic |\n| GraphSAGE | 204K | 2.69 | yes | no | logic |\n| GAT | 205K | 3.18 | yes | no | logic |\n| CongestionNet | 280K | 2.99 | yes | no | logic |\n| pix2pix | 992K | 0.35 | no | yes | placement |\n| LHNN | 54K | 65.21 | yes | yes | placement |\n| Circuit GNN | 480K | 4.09 | yes | yes | both |\n\nNet Length Prediction\n\n| Model | # parameter | Inference Time (s) | Topology-aware | Geometry-aware | Compatiblity to logic/placement |\n| --- | --- | --- | --- | --- | --- |\n| MLP | 4K | 0.60 | no | no | neither |\n| Net^2f^ | 13K | 1.13 | yes | no | logic |\n| Net^2a^ | 39K | 2.39 | yes | no | logic |\n| LHNN | 54K | 21.41 | yes | yes | placement |\n| Circuit GNN | 694K | 3.51 | yes | yes | both |\n\nNote that **Inference Time** is evaluated on *superblue19*[1] (with 496045 cells, 515951 nets and 1912420 pins).\n\n[1] Bustany et al. ISPD 2015 Benchmarks with Fence Regions and Routing Blockages for Detailed-Routing-Driven Placement.", " This paper proposed a novel graph representation: Circuit Graph, integrating the heterogeneous circuit information from logic synthesis and placement to facilitate the EDA design process. The proposed graph structure considers both topological (cell connection in the netlist) and geometric information (positioning of the standard cells on the layout). A corresponding graph neural network (GNN) structure is proposed for extracting circuit representation for various downstream tasks. The experimental results demonstrated the effectiveness of the graph in congestion and net wirelength prediction tasks with efficient NN computation. Strengths:\n1. Heterogeneous information fusion across multiple EDA design stages. Typically, circuit designs are divided into multiple phases. Each phase may have its own unique representation for the same underlying circuit. The proposed circuit graph brings two representations (netlist and cell placement) into a unified graph representation, which provided a more informative data structure embedding knowledge from multiple EDA design phases.\n1. The proposed circuit graph is general enough to be extended to inspire future work. 
The paper only touches on congestion and net wirelength prediction tasks for detailed routing, and the graph featurization contains only related basic topology information and simple geometric information. The reviewer believes the proposed graph can inspire more work in EDA areas. For example, by adding standard cell delay as one new feature in the cell node, the proposed graph may also help with the timing analysis of the circuit.\n1. The overall GNN structure follows the design of the circuit graph, which sounds promising. The topological and geometric message passing structures preserve the structure of the original circuit graph, \n\nWeaknesses:\n1. The paper didn't touch on how representative the extracted GNN features are. The two tasks (congestion prediction and net wirelength prediction) in the paper are experimented independently. Although these two tasks have different readouts, they shared the same input graph features and extract GNN feature representation. It would be interesting to check if the knowledge can be transferred from one task to another using the proposed GNN.\n1. Although the overall GNN structure sounds promising, some detailed formulation or design choice of GNN needs to be further justified. Detailed comments are made in the questions. 1. The reviewer wonders if the authors can make comments about how transferrable the proposed GNN features are.\n1. The authors claimed the proposed GNN is very efficient in terms of computation. To be used in practical cases, the time consumed in training and inference of GNN should be compared with the traditional EDA routing tool. Some simple statistics over this can help justify the actual impact of the usage in EDA.\n1. Detailed GNN formulation question 1): In equation 5, a matrix $W_{\\varepsilon^T\\rightarrow\\mathcal{U}}$ is used in topological message passing, and in equation 7, a vector $a$ is used in the geometric message passing. What is the consideration underneath the difference here?\n1. Detailed GNN formulation question 2): To fuse the topological and geometrical messages, maxpooling is used (equation 3). What is the underlying thought process to use maxpooling for fusion, and what may be the effect of other fusion methods (e.g. adding, concatenation, averagepooling)?\n1. Detailed GNN formulation question 3): The readout for cell and net also include the raw features. The reviewer wonders how important those raw features are. If excluding them from the readout results in significant performance drops, the usefulness of the extracted GNN representation should be questioned. The paper mentioned that one limitation is to test the proposed method under commercial products and more complex scenarios. The reviewer appreciate that the authors bring up this and understand the difficulty behind it.", " This work constructs a modeling framework that aims to solve various problems in the circuit design process. This work incorporates \n1. A novel circuit graph that is able to jointly integrate the topological and geometrical information and is claimed to be the first unified circuit representation approach that can be easily compatible across EDA tasks and stage.\n2. A novel message-passing paradigm, CircuitGNN, that is tailored towards the aforementioned graph dataset structure. The structure can conduct message-passing on both topological and geometrical edges distinctively and then fuse the messages to update cells and nets representations\n3. 
Extensive experiments validates the merits of the proposed methods in terms of both the task accuracy and execution speeds. Strength:\n1. This work does a good job on analyzing and illustrating the tasks and problems of circuit EDA in light of the machine learning methods.\n2. The methodology is described in much detailed but straight-forward way.\n3. Overall this work provides decent improvements over the existing methods. Just per the results alone, it is impressive.\n4. The code provided in the supplementary materials is certainly a plus, contributing to the transparency and reproduction of the works in the fields.\n\nWeakness:\n1. Apart from the improvements on the message passing methods, one of the key contributions of this work is to be able to jointly integrate the topo and geom information in one model. However, I do not see clearly the motivation for this point from both the application and results perspectives. For actual application, is there a significant disadvantage of simply using two sets of models or even methods respectively for logical synthesis and place-and-routing? One of the reasons, I would perceive, is that a joint model may yield a better task performance due to the complementary information. However, as Table 2 suggests, the joint model's improvements against the proposed method with only geom message passing. 1. Would be great to provide more summarized details on the proposed models and the baselines in terms of the number of parameters and operations and their types. Yes, it's addressed.", " The authors propose a unified way to construct graphs in different phases of EDA flow, and develop a general GNN model for downstream EDA tasks. Specifically, the proposed approach first constructs a heterogeneous graph by incorporating cell-cell connections (geometrical information) and cell-net connections (topological information). The node and edge features are generated based on physical properties of cells, pins, and nets. Then, a circuit GNN model is proposed to apply message passing on cell-cell and cell-net connections separately, which produces the representations of cells and nets for downstream tasks. The experimental results show that the proposed method increases 16.7% accuracy on congestion prediction and reduces 16.9% error on wirelength prediction. **Key Strength**\n\n - The paper is clearly written. All the technical steps are easy to follow.\n - The proposed method can be used to solve multi-stage EDA tasks.\n \n\n**Key Weakness**\n\nAlthough the proposed circuit graph construction and GNN model are all reasonable, they lack some technical significance. For example,\n - For circuit graph construction, it is straightforward to construct a bipartite graph based on cell-net connections from netlist, in order to produce representations of cells and nets for downstream tasks. Hence, the contribution is limited for the graph construction, especially in logic synthesis stage where placement information is not available.\n - For GNN model, it is a common way to apply message passing individually per edge type for handling heterogeneous graphs (e.g., [2]). Thus, the novelty of the proposed model is limited.\n \n\nAlthough the experiments show promising accuracy gains for downstream EDA tasks, further clarification could make the improvements more convincing:\n - Missing strong GNN baselines: The chosen baselines (i.e., GCN, GraphSAGE, and GAT) only consider node features. 
Since edge features are important in this paper, authors should compare the proposed model against stronger baselines (e.g., MPNN[1]) that incorporate edge features, on the same input graph. Without a stronger baseline, the contribution of the proposed GNN model is unclear.\n - Not tuning hyperparameters for baselines: Authors choose the default hyperparameters for baselines from their original papers. Since the datasets used in those papers (e.g., [3]) are different from this paper, hyperparameter tuning is necessary.\n - Not comparing against DREAMPlace: The purpose of wirelength prediction is to speedup EDA design closure. Nonetheless, there are no results of the runtime comparison between the proposed model and the placement method DREAMPlace, which is a very fast placement method by exploiting GPUs. Without this comparison, it's unclear about the motivation of wirelength prediction in placement.\n\n[1]: Gilmer et al. \"Neural message passing for quantum chemistry.\" ICML'17. \\\n[2]: Zhang et al. \"Heterogeneous graph neural network.\" KDD'19. \\\n[3]: Xie et al. \"Pre-Placement Net Length and Timing Estimation by Customized Graph Neural Network.\" TCAD'22. - For the wirelength prediction task in placement stage, authors use DREAMPlace to generate both the ground truth and node/edge features. If the ground truth is already known, why does it even need to generate features for GNN?\n - Is there any reason/insight why two different message-passing functions are used for topo-edges (Equation 5) and geom-edges (Equation 7)?\n - Apart from logic synthesis and placement, how does the proposed work incorporate graphs in other EDA stages (e.g., data-flow graph in high-level synthesis)?\n - Is it necessary to have geom-edges? Can we encode the cell positions into node feature vectors instead? Thanks authors for mentioning potential limitations of this work. One key challenge of deploying ML models into commercial EDA tools is the model generalizability. Authors can evaluate the trained model on more unseen designs to see if it is truly generalizable." ]
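To make the message-passing design debated in the record above easier to picture, here is a minimal, hypothetical PyTorch sketch of a dual-edge layer: matrix-transformed messages over topological (cell–net) edges, cheaper scalar edge-weight summation over geometrical (cell–cell) edges, and element-wise max fusion, loosely following the rebuttal's description of its Equations 3, 5 and 7. Only the cell-update direction is sketched; this is not the authors' CircuitGNN implementation, and all class/variable names, feature dimensions and the toy data below are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


class DualEdgeMessagePassing(nn.Module):
    """Illustrative layer (not the paper's code): matrix messages on topo
    (cell<-net) edges, scalar edge-weight summation on geom (cell<-cell)
    edges, fused by element-wise max before the cell update."""

    def __init__(self, dim_cell: int, dim_net: int):
        super().__init__()
        # Topo branch applies a learned matrix to every net message (the costlier path).
        self.w_net_to_cell = nn.Linear(dim_net, dim_cell, bias=False)
        self.update = nn.Linear(2 * dim_cell, dim_cell)

    def forward(self, h_cell, h_net, topo_edges, geom_edges, geom_weight):
        # --- Topological message passing: cell <- net over bipartite edges ---
        cell_idx, net_idx = topo_edges                      # each of shape [E_topo]
        topo_msg = self.w_net_to_cell(h_net)[net_idx]       # [E_topo, dim_cell]
        topo_agg = torch.zeros_like(h_cell).index_add_(0, cell_idx, topo_msg)

        # --- Geometrical message passing: cell <- cell, scalar weight per edge ---
        src, dst = geom_edges                               # each of shape [E_geom]
        geom_msg = h_cell[src] * geom_weight[:, None]       # edge-weight summation
        geom_agg = torch.zeros_like(h_cell).index_add_(0, dst, geom_msg)

        # --- Fuse by element-wise max, then update the cell representation ---
        fused = torch.maximum(topo_agg, geom_agg)
        return torch.relu(self.update(torch.cat([h_cell, fused], dim=-1)))


# Tiny smoke test with random data (all sizes are arbitrary assumptions).
if __name__ == "__main__":
    n_cells, n_nets, dc, dn = 6, 3, 8, 4
    layer = DualEdgeMessagePassing(dc, dn)
    h_cell, h_net = torch.randn(n_cells, dc), torch.randn(n_nets, dn)
    topo = (torch.randint(0, n_cells, (10,)), torch.randint(0, n_nets, (10,)))
    geom = (torch.randint(0, n_cells, (12,)), torch.randint(0, n_cells, (12,)))
    w = torch.rand(12)
    print(layer(h_cell, h_net, topo, geom, w).shape)  # torch.Size([6, 8])
```

The point of the sketch is the cost asymmetry argued in the rebuttal: the geom branch only scales neighbour features by a scalar weight per edge, while the topo branch applies a learned matrix to every message, which is roughly a hidden-dimension factor more expensive.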
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "2N6eK2vaYBe", "aviiJRLS0tO", "zT9YofH_Vp", "nips_2022_nax3ATLrovW", "TO75D-31Eqo", "KvnG-JaMxs6", "6AY9psURFST", "fxG-3WqEPBB", "fxG-3WqEPBB", "fxG-3WqEPBB", "6AY9psURFST", "s-NDb9krlhC", "nips_2022_nax3ATLrovW", "nips_2022_nax3ATLrovW", "nips_2022_nax3ATLrovW" ]
nips_2022_tvwkeAIcRP8
S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint
In this paper, we address the "dual problem" of multi-view scene reconstruction in which we utilize single-view images captured under different point lights to learn a neural scene representation. Different from existing single-view methods which can only recover a 2.5D scene representation (i.e., a normal / depth map for the visible surface), our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene. Instead of relying on multi-view photo-consistency, our method exploits two information-rich monocular cues, namely shading and shadow, to infer scene geometry. Experiments on multiple challenging datasets show that our method is capable of recovering 3D geometry, including both visible and invisible parts, of a scene from single-view images. Thanks to the neural reflectance field representation, our method is robust to depth discontinuities. It supports applications like novel-view synthesis and relighting. Our code and model can be found at https://ywq.github.io/s3nerf.
Accept
This paper had reviews ranging from borderline reject to strong accept. The most negative reviewer had concerns about the assumptions in the framework (point light sources) and the loss of accuracy as the number of light sources decreases, but the remaining reviewers were compelled by the ability to handle scenes with lights not at infinity and the integration of shadow constraints to constrain the structure of scene parts not directly viewed. Overall, I agree with the three positive reviewers that this paper considers an interesting variation of the photometric stereo problem, with a coherent experimental evaluation that shows the contributions of each piece of the overall system. Therefore, I accept this paper.
test
[ "KrAGjdkvUVN", "zYUIrVMR1dL", "2ACPOTAwWo", "hlAWNz9DUj3", "ehDjzGqzptI", "xZfN-ACVRlP", "jNDKVOzBQsy", "IT2lRNgm7dU", "X582xS8_9W9", "MgUt6rLteME", "q-yf-NXnrMH", "Fo98ntBG61r", "fqNX1xZ9ro_", "kxgRqQZXY96", "N6NgMgnZjY3", "X9Dek14yHtp", "GqkNEe-O7U", "aCxLvsYrVkh" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer bpE7,\n\nWe just noticed that our previous responses posted under your Acknowledgement thread (titled “Author Rebuttal Acknowledgement by Paper1398 Reviewer bpE7”) are invisible to you and other reviewers. We therefore attach our responses again under this original thread for your reference. Sorry for the inconvenience caused. Thanks!\n\n&nbsp;\n\n--------\n&nbsp;\n\n\nThank you very much for your previous comments that helped us improve this\nmanuscript. We have revised the paper as suggested (highlighted in blue font). The following summarizes the modifications.\n- We have included the discussion for the demands of our setup and the challenges that might occur in a more complicated scene in limitations in Sec. 5.\n- We have added the nearest input image in Fig. 4 of the main paper. \n- We have included experiments on the real data in the main paper in Sec. 4.3.\n- We have added other experiments to the supplementary materials.\n- Implementation details are moved to the supplementary materials (Sec. C) due to page limitation.\n\nWe would like to emphasize again that our method tackles the problem of photometric stereo [A], and the setup we proposed is already more practical than existing photometric stereo methods [B,C]. Thus, it might be **unfair** to simply consider our work \"The task can be unnecessarily challenging and ambiguous\".\n\nWe have released the data of all real scenes and one synthetic scene, as well as the demo evaluation code in https://github.com/neuralps3d/neuralps3d. The full code and datasets will be released upon acceptance. \n\nWe hope that our response has addressed your concerns and turned you to be positive about the paper. Please feel free to let us know if you have any further concerns or comments. Thanks!\n\n[A] A Benchmark Dataset and Evaluation for Non-Lambertian and Uncalibrated Photometric Stereo, TPAMI 2019 \n[B] GPS-Net: Graph-based Photometric Stereo Network, NeurIPS 2020 \n[C] Neural Reflectance for Shape Recovery with Shadow Handling, CVPR 2022\n", " Thanks again for the positive comments and insightful suggestions.\n\nWe have added the analysis of normal smoothness loss to the supplementary materials (Sec. G). Besides, we also included the experiments on the real data in the main paper in Sec. 4.3.\n\nWe also release the data of all real scenes and one synthetic scene, as well as the demo evaluation code in https://github.com/neuralps3d/neuralps3d. The full code and datasets will be released upon acceptance.\n\nWe sincerely look forward to your further feedback. Thanks!", " Thanks again for the overall positive comments and insightful suggestions.\n\nWe have added the analysis of complicated background (Sec. E.2) and effect of light distribution on invisible regions (Sec. E.6) to the supplementary materials. Besides, we also included the experiments on the real data in the main paper in Sec. 4.3.\n\nWe also release the data of all real scenes and one synthetic scene, as well as the demo evaluation code in https://github.com/neuralps3d/neuralps3d. The full code and datasets will be released upon acceptance.\n\nWe sincerely look forward to your further feedback. Thanks!", " Thanks for your positive feedback, which has led to a substantial improvement of our paper's quality! According to your comments and suggestions, we have added the additional results in the revised paper (highlighted in blue font). The following summarizes the modifications.\n- We have updated Fig. 2 and its caption and clarified the $C_v$ in Sec. 3.4. 
\n- We have included the experiments on real data in the main paper in Sec. 4.3. \n- We have added other additional experiments in the supplementary materials.\n- Due to the page limit, we moved the implementation details to supplementary materials (Sec. C). \n\nAs promised in the abstract of our paper, we will release the code and dataset upon acceptance. In this stage, to help the reviewers better understand our method, we release the data of all real scenes and one synthetic scene, as well as the demo evaluation code in https://github.com/neuralps3d/neuralps3d. The full code and datasets will be released upon acceptance. Please kindly note that all the reviews and discussions on the OpenReview system will be made **public** after the paper gets accepted. We will keep our commitment.\n\nPlease feel free to let us know if you have any further concerns or comments. Thanks!", " Thanks again for all reviewers' constructive comments. We have modified our paper according to reviewers' suggestions in the revised version, including\n- update Fig. 2 and its caption [13bh] and Fig. 4 [bpE7],\n- clarify $C_v$ in Sec. 3.4 [13bh],\n- include experiments on real data in the main paper in Sec. 4.3 [13bh,H2aV,bpE7],\n- include the discussion of the demands of our setup and the challenges that might occur in a more complicated scene in limitations in Sec. 5 [bpE7],\n- add other experiments and move the implementation details to supplementary materials (Sec. C),\n - analysis on complicated background (Sec. E.2) [13bh,H2aV,bpE7],\n - analysis on shadow modeling in foreground and background regions (Sec. E.3) [13bh],\n - compare with MLP regression for shadow computation (Sec. E.4) [bpE7],\n - effect of area light (Sec. E.5) [bpE7],\n - effect of light distribution for invisible shape reconstruction (Sec. E.6) [H2aV],\n - effect of normal smoothness loss (Sec. E.7) [nudb],\n - more details of real dataset (Sec. G) [13bh,H2aV,bpE7].\n\n\nWe also release the data of all real scenes and one synthetic scene, as well as the **demo evaluation code** in https://github.com/neuralps3d/neuralps3d. The full code and datasets will be released upon acceptance. We hope our code and dataset can benefit future research in this direction. \n\nPlease feel free to let us know if you have any further concerns or comments. Thanks!", " My major concern about \"*how the proposed method contributes to the community*\" is adequately addressed by the additional experiments provided in the rebuttal. These additional results are crucial, and the authors should add them to the main paper as primary supporting results. \n\nBesides, I would be more convinced if the author could **provide the code and data** for me to reproduce the results. I hope the authors could **release** their data and code if the paper got accepted as they promised. I was the reviewer of an ICCV 2021 PS paper, and the authors of that paper promised in their rebuttal to release their code once their paper was accepted. However, as far as I know, that paper's code is still unavailable. \n\nAt this time, I am leaning toward increasing my rating to acceptance. 
", " Dear AC and all reviewers:\n\nThanks again for all of your constructive suggestions, which have helped us improved the quality and clarity of the paper!\n\nSince the discussion phase has started for over three days, we have not heard any post-rebuttal response yet.\n\nPlease don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. Thanks!", " \nThank you for the positive comments and insightful suggestions.\n\n**_Q1\"The authors studied the contribution of different components (shading and shadow) of their method. But the impact of different loss terms was not included. Specially, at L171, how the value of for normal smoothness loss is found is not illustrated.\"_** \n**A1:** Following UNISURF, we apply the normal smoothness loss to promote the surface smoothness. We adopt the same loss weight as in UNISURF [38].\n\n\n**_Q2: \"How’s the normal smoothness loss affect your method? Is that similar to UNISURF? From your result, especially on LEGO, the loss of detail of studs and track seems very serious. Is that caused by normal smoothness loss or by more complicated light reflection at those areas?\"_** \n**A2:** Thanks for the insightful comment. The adopted normal smoothness loss is similar to UNISURF. Remember that the surfaces in the invisible region are primarily constrained by the shadow information, which is less constrained than the visible surface due to the lack of shading information. To further study the impact of the normal smoothness loss, we did an ablation study on the loss term. Results in https://sites.google.com/view/ps3d (Sec. H) show that imposing the normal smoothness loss is helpful to reduce the artifacts in the invisible regions.\n\nWe experimentally verified that the loss of details in LEGO is not caused by normal smoothness loss. We think it might be caused by two reasons. First, LEGO contains many complicated thin structures and the object region is relatively small in our data (in a region of about 200X200 pixels). Second, our method only observes the object from a single view, making thin structures less distinguishable. We will consider improving our method for thin structure modeling in the future.\n", " \n**_Q5: \"line 149: has the proposed shadow generation method been validated over direct regression method using an MLP?\"_** \n**A5:** Thanks for the suggestion. We trained a variant model replacing the ray-marching visibility computation with a direct visibility MLP. Results in https://sites.google.com/view/ps3d (Sec. E) show that simply regressing the visibility produces worse results, as this MLP cannot regularize the occupancy field. In contrast, our method performs ray-marching in the occupancy field to render shadow, providing strong constraints for the occupancy field.\n\n**_Q6: \"fig 4: should also show the nearest neighbor training samples\"_** \n**A6:** Thanks for the suggestion. Results in https://sites.google.com/view/ps3d (Sec. G) show the nearest training samples, and error maps of the nearest sample and our results. We can see that our method can accurately render cast shadows under novel lights. We will include the nearest training samples in Figure 4 in the revised version. \n\n\n**_Q7: \"The calibrated lighting is more demanding than calibrated camera pose... 
The lighting needs to be real directional light source, which is difficult to acquire in real-world scenarios.\"_** \n**A7:** Please kindly refer to the first response.\n\n\n**_Q8: It would be also valuable to discuss potential applications for the proposed setup and algorithm._** \n**A8:** First, as a photometric stereo method, our method can be applied to any applications that require photometric stereo technique. Second, our method can reconstruct the full shape and BRDFs of the object from single-view images captured under different lightings, and supports novel-view rendering and relighting. It is particularly helpful in scenarios where the camera cannot be moved (e.g., the surveillance camera). Last, the proposed algorithm (e.g., the efficient shadow modeling strategy) can potentially be integrated with multi-view methods for better shape reconstruction. ", " \nThank you for the constructive comments.\n\n**_Q1: The scope of the paper is quite limited, in terms of the problem setup and the practicality. The setup is demanding -- real directional light plus known lighting direction. This highly controlled lighting setup seems to only be feasible in a laboratory or a light stage setting._** \n**A1:** Note that our method assumes a near-point light, instead of directional light. It is true that the setup of photometric stereo is demanding in the past. However, recent deep lighting calibration methods can achieve robust calibration results by simply using the captured images of the target scene without any calibration tools [A,B], which significantly reduces the complexity of the capturing setup. \nIn our experiments on real data (see Sec. A in https://sites.google.com/view/ps3d), we simply use a fixed camera and a handheld cellphone flashlight as the light source. We use SDPS-Net [A] for initial lighting calibration, which is then jointly optimized with the model. Given only weakly calibrated lightings, our method achieves good reconstruction results for normal and shape on this challenging real dataset, clearly demonstrating the practicality of our method. \n[A] Self-calibrating deep photometric stereo networks, CVPR 2019 \n[B] Self-calibrating Photometric Stereo by Neural Inverse Rendering, ECCV 2022\n\n**_Q2: \"Cast shadow depends on the object geometry, the lighting, and also the surface the shadow is cast on. The geometry of the surface and the geometry of the object can be entangled to cause ambiguity. This is more severe in real-world examples.\"_** \n**A2:** We agree that the appearance of cast shadow also depends on the surface that shadow is cast on (denoted as background for simplicity). But note that the background geometry itself is also constrained by the shading cues that exist in the multi-light images, which will not introduce ambiguity to the problem. In fact, our results on scenes with multiple objects provide strong evidence for this argument (Figure 7 of the paper). We can treat one of the objects as the foreground, and the rest as the background, resulting in a highly-irregular \"background\". The results show that our method can faithfully reconstruct the object as well as the \"background\".\n\n**_Q3: The reconstructed geometry is not on par with multi-view settings, and degrades drastically as the number of lights decreases. My biggest concern about this paper is it makes the reconstruction task unnecessarily challenging and impractical._** \n**A3:** Note that the goal of this work is not to substitute the multi-view stereo methods. 
Instead, as a PS-based method, our work greatly advances the near-field photometric stereo methods (from 2.5D to 3D reconstruction). This is complementary to existing multi-view methods while only single-view images are required. \nIt is obvious that the performance of our method will decrease as the number of lights decreases, but a similar phenomenon also happens in multi-view methods when the number of views decreases. And according to our analysis on light numbers (Table 4 and Fig. 9 in main paper), our method can achieve comparable results even when only 8 lights are given. \nOur new experiments on real data show that the full shape of an object can be reconstructed from single-view images captured by a fixed camera and a moving handheld cellphone flashlight under a **casual** capturing setup, clearly demonstrating the practicality of our method.\n\n\n**_Q4: \"to make the setup slightly more practical, how well does the method perform when shadow is softer, i.e., slightly larger light source, or small area lights?\"_** \n**A4:** Thanks for the insightful comment. To analyze the effect of soft shadow caused by a larger light source, we tested our method on data rendered using light sources with different scales (i.e., a sphere with a radius of 1/50, 1/25, or 1/10 of the object size). Results in https://sites.google.com/view/ps3d (Sec. D) show that our method is robust to larger light sources (e.g., 1/50 and 1/25). We also observe that when the light source size is considerably large (e.g., 1/10), the results in the object boundary will decrease because of the heavy soft shadow. Note that this is not a problem in practice as it is very easy to find a point light source whose size is smaller than 1/25 of the object size (e.g., the cellphone flashlight). \n", " \nThank you for the positive comments and insightful suggestions.\n\n**_Q1: \"Each part of the proposed method is not novel. The method consists of three major parts: neural reflectance field representation follows [38]...the color and BRDF modeling follow [15,24]...the shadow computation is slightly different to [48], but it is similar to [24] where they both check the visibility of the surface point._** \n**A1:** The main contribution of our method is a NeRF-based framework for near-field photometric stereo which can reconstruct the *full* 3D shape of the object by jointly utilizing shading and shadow cues. To the best of our knowledge, we are the first near-field photometric stereo method that can reconstruct the invisible part of the scene. The technical novelty includes \"an online shadow modeling strategy for efficient shadow modeling\" and \"combining volume and surface rendering loss for better shape regularization\". We agree that the first two major parts are not very novel. However, our shadow modeling is largely different from [24], as [24] assumes a directional light and uses a depth map for shadow computation. In contrast, we assume a near-field point light and use occupancy field for shadow computation.\n\n**_Q2: \"It is suggested to evaluate this method in more challenging real and synthetic datasets.\"_** \n**A2:** Thanks for the good suggestion. To further evaluate the robustness of our method, we additionally tested our method on a synthetic dataset with more complicated backgrounds, and a challenging real dataset captured by ourselves (see Sec. A & B in https://sites.google.com/view/ps3d). 
The results show that our method is robust to backgrounds with different lightness and textures, and our method can achieve good results on real data even in a weakly calibrated setup (i.e., the lights are initialized as the prediction of a lighting estimation method SDPS-Net [7] and then jointly optimized with the model). \n\n**_Q3: \"I am curious to see how the number of lights affects the accuracy of the geometry in the invisible part of the scene...If lights fail to illuminate the invisible regions from different angles, how will the accuracy be affected?_** \n**A3:** We have discussed the effect of how the number of lights and the range of light distributions on the shape reconstruction in the paper. The normal MAE, depth L1, and side-view shape can be found in Table 4, Table 5, and Figure 9 of the paper. As it is not easy to isolate the invisible part given different objects have different shapes, we instead report the chamfer distance between the reconstructed and ground-truth meshes, which can quantify the full shape reconstruction. Results in https://sites.google.com/view/ps3d (Sec. F) show that the shape accuracy will improve given more lights, and our method is able to achieve robust results given 8 input lights. When the light distribution becomes narrow (small), the shape accuracy will decrease.\n", " \n**_Q7: \"There is no intuitive explanation for the role of modeling shadow for shape estimation. What is the problem aiming to solve by considering the shadow, the ambiguity, additional clue of global shape information, useful clue for depth estimation, or even light refinement\"_** \n**A7**: Our method explicitly makes use of this additional shadow cue for full shape reconstruction. Note that our method considers the classic problem of near-field photometric stereo, where the shading and shadow are two dominant cues for shape recovery. The shadows can provide strong information for the invisible region, which is analogy to shape from silhouette. Considering the light source as the camera viewpoint, shadow is formed when light rays are blocked by objects. If we view from the light source, the contour of the shadow is the same as the silhouette of the object on the image plane (light projects the contour of objects onto the scene, while objects are projected back onto the image plane).\n\n**_Q8: \"The evaluation is mainly conducted on the synthetic dataset built by the authors, which is less convincing. Although the paper shows the result of the LUCES dataset in the supplementary material, the depth estimation is not satisfactory (4.39 vs. 3.17 for the state-of-the-art). \"_** \n**A8:** Note that the experiment on LUCES does not aim to achieve state-of-the-art depth estimation performance but shows that our method is able to work on the **real data** for completeness. In fact, when testing on the LUCES dataset, our method degenerates to the *\"Ours w/o shadow\"* baseline, which is much worse than the full model (see Table 3 and Figure 6 of the paper). As mentioned in Line 47 of the supplementary material, the LUCES dataset is **not** suitable to evaluate the full potential of our method, as the shadow and shading information of the background regions cannot be observed. \nTo further evaluate our method, we evaluate our method on a real dataset captured by ourselves. Results in https://sites.google.com/view/ps3d (Sec. 
A) show that our method achieves **good normal** and **shape reconstruction** results on the **real** dataset given only roughly calibrated lightings, clearly demonstrating the effectiveness of our method.\n\n**_Q9: \"The compared methods, NeRF and UNISURF, are implemented in a naive way. It is unclear whether the authors have also sampled points along the light ray in those methods. If not, it can be expected that the reconstruction results are not good because NeRF and UNISURF are mainly applied in multi-view stereo...\"_** \n**A9**: We are not about to show the superiority over NeRF/UNISURF on single view reconstruction. Instead, we would like to demonstrate that directly applying these neural rendering methods or naively conditioning light on the radiance field does not work for single view reconstruction. Note that we compared three types of methods in the experiments, including the single-image depth estimation, near-field photometric stereo methods, and NeRF-based methods. We did not include the explicit shadow computation (mentioned as \"sample points along the light ray\") for the NeRF-Based methods, as the shadow modeling is one of our main technical contributions. In fact, the \"UNISURF+shadow\" method is similar to the baseline method \"Ours w/o shading\" in the ablation study (see Line 228 of the paper). Table 3 of the paper shows that simply integrating UNISURF and shadow modeling cannot produce good results, e.g., MAE comparison for the CHAIR object is \"Ours w/o shading\" (32.49) vs. \"Ours\" (1.93).\n\n\n**_Q10: This work has lots of pre-setting to the environment (the background color and the background complexity), which will limit the application scope of this method._** \n**_Q11: How well the proposed method can perform under a more complicated background._** \n**A10-11**: To further evaluate the robustness of our method on more complicated backgrounds, we evaluated it on four scenes rendered with different backgrounds, including two uniform color backgrounds with different lightness (denoted as 'Light' and 'Dark') and two textured backgrounds. Results in https://sites.google.com/view/ps3d (Sec. B) show that our method is robust to backgrounds with different lightness and textures. Moreover, our new real-world experiments show that the capturing setup can be very simple.", " Thank you for the constructive comments.\n\n**_Q1: No formula illustrates how $C_v$ in Eq.8 is calculated._** \n**A1:** Sorry for the confusion. The $C_v$ mentioned in Eq.8 is actually the $C(r)$ in Eq.7. We first derive volume rendering and later introduce surface rendering in Eq.10. We will clarify it in Sec. 3.4 in the revised version.\n\n**_Q2: In Figure 2, the same \"MLP\" block is shown in two separate branches._** \n**A2:** Thanks for the good suggestion. We will redraw Figure 2 in the revised version.\n\n**_Q3: \"How the points along the rays are sampled is unclear to me\"_** \n**A3:** Sorry for the confusion. The points along the camera ray are sampled following the manner of UNISURF [38]. For each ray, we first apply root-finding to locate the surface point, and then define an interval around the surface point to sample points. We use a relatively large interval for point sampling to make sure the full scene is well sampled. The term \"around the surface\" does not mean \"close to the surface\". 
We will update the caption in Figure 2 to avoid confusion.\n\n\n**_Q4: \"It is still unclear to me why the shadow becomes a strong constraint on the model that can substitute the constraint from multi-view photo-consistency\"_** \n**A4:** This paper tackles **the problem of near-field photometric stereo**, which is a longstanding and important problem in computer vision [46]. Our goal is **not to substitute** the multi-view photo-consistency with shadow constraints. Instead, considering a scenario where single-view images are captured under different point lights, we target utilizing both shading and shadow information for full 3D shape reconstruction. Note that the shadow cues can also be incorporated with the multi-view photo-consistency to improve the shape reconstruction.\n\n**_Q5: Analyze the contributions of cast shadows modeling in foreground and background separately._** \n**A5:** Thanks for the insightful comment. We have conducted extra experiments to analyze the effect of cast shadow modeling in both regions. Specifically, we trained two variant models, one without foreground shadow modeling and the other without background modeling. Results in https://sites.google.com/view/ps3d (Sec. C) show that modeling cast shadow in both regions is important, as disabling either one of them leads to decreased accuracy.\n\n**_Q6: \"Even the background is well reconstructed with little artifacts. However, the author didn't directly explain why their method is not sensitive to the background (it is almost perfect) and can reconstruct an accurate background simultaneously.\"_** \n**A6:** We do **not** claim that our method is better at reconstructing the background than other methods. There are possible two reasons why we got clean background results in the paper. First, the background in our synthetic dataset has a uniform texture with a regular shape, which is easier to recover. Second, our method makes use of the rich shading information that exists in images illuminated by multiple lights, providing strong regularization for the planar background.\n\n", " We sincerely appreciate all reviewers’ and ACs’ time and efforts in reviewing our paper. We thank all the reviewers for their recognization of our work on the following aspects.\n* **Problem Setup.** *\"interesting setup\"* [H2aV,bpE7];\n* **Model.** *\"a promising framework for near-field photometric stereo\"* [13bh], *\"shows the originality\"* [13bh];\n* **Experiments.** *\"state-of-the-art performance\"* [13bh,H2aV,nudb]; \n* **Wrting.** *\"well-written paper\"* [H2aV,nudb].\n\nAnd we also thank all reviewers for their insightful and constructive suggestions, which help a lot in further improving our paper. In addition to the pointwise responses below, we clarify our idea and contribution, and summarize the new experiments suggested by the reviewers. \n\n**Idea and Contribution**\n\nIn this work, we propose a NeRF-based method for near-field photometric stereo (PS). By explicitly utilizing single-view shading and shadow cues, our method is able to reconstruct **the full 3D shape** of the object, including the **visible** and **invisible** regions. It is not possible by existing PS methods as they can only recover a **2.5D scene** representation (i.e., normal or depth map) to describe the **visible** surface. \nNote that our method is based on PS settings, where multi-view information is not available. And the goal of our method is not to substitute the multi-view stereo methods. 
Instead, we propose to utilize both shading and shadow information in the scene to improve the current PS methods (from 2.5D to 3D shape reconstruction). Our work is complementary to existing multi-view methods while only single-view images are required. Moreover, our idea of jointly modeling shading and shadow cues is also potentially beneficial for multi-view reconstruction.\n\n**New Experiments**\n\nTo address the reviewers' questions and support our responses, we conduct the following experiments and put the results in the link: https://sites.google.com/view/ps3d. \n- Results on real data with casual imaging setup [13bh,H2aV,bpE7].\n- Results on more complicated synthetic data [13bh,H2aV,bpE7].\n- Effects of shadow modeling in foreground and background [13bh]. \n- Results on scenes illuminated by area light [bpE7].\n- Replacing ray-marching shadow computation with direct MLP regression [bpE7].\n- Effects of the surface normal smoothness loss [nudb].\n\n**Details for the Experiment on Real Data**\n- Casual capturing setting \nWe captured a real dataset using a fixed camera (the focal length is 28mm) and a handheld cellphone flashlight (see Sec. A in https://sites.google.com/view/ps3d). The object is put on the table and close to the wall. We turned off all the environmental light sources and only kept the flashlight on, which was randomly moved around to capture images illuminated under different light conditions. The captured dataset consists of 3 objects (around 70 images for each object).\n- Lighting calibration \nOur setup **does not** require manual calibration of lights. Instead, we applied the state-of-the-art self-calibrated photometric stereo network (SDPS-Net [7]) for light direction initialization, and roughly measured the camera-object distance as initialization of light-object distance. After initialization, the position and direction of lights are jointly optimized with the shape and BRDF during training.\n- Results \nWe tested our method on three objects and show their rerendered results as well as shape reconstruction results in https://sites.google.com/view/ps3d (Sec. A). Even with this **casual capturing** setup and **uncalibrated** lights, our method achieves satisfactory results in full 3D shape reconstruction.\n", " This paper proposed a promising framework for near-field photometric stereo utilizing volume and surface rendering. The proposed method is tested on a synthesis dataset and a real dataset. Their results have reached state-of-the-art. \n ---Originality---\n1) The overall idea shows the originality, especially in how to combine surface and volume rendering.\n2) Imposing the shadow's constraint on the occupancy field in the 3D space is enlighting. \n---Quality---\n1) Eq.8: the author used $C_v$ to denote the volume-rendered image and $C_s$ to indicate surface rendered image. However, no formula illustrates how $C_v$ is calculated, which confuses reading. \n2) Figure 2: two \"MLP\" blocks are shown in two separate branches, which makes me think there are two different MLPs estimating occupancy along the surface-to-light segment and the camera ray. However, according to Eq.6, the MLP for shadow handling is identical to the MLP for the occupancy field. Please try to redraw this part.\n---Clarity---\n1) Figure 2's caption, lines 2-3: How the points along the rays are sampled is unclear to me. The author mentions to sample $N_V$ points around the surface and $N_L$ points on the surface-to-light segment. 
I assume $N_L$ points are sampled uniformly, but without knowing the scene's scale, it is unclear how to ensure the sampled points along the camera ray are \"around the surface.\"\n2) Although the author tries to explain the necessity of the shadow clues for surface reconstruction in Section 4.4 and Section B in the supplementary material, it is still unclear to me why the shadow becomes a strong constraint on the model that can substitute the constraint from multi-view photo-consistency. There are two kinds of cast shadows in the synthesis dataset, including the foreground shadow on the object, and the background shadow. According to the results in section F, table S2, and section 4.2 table 2, the visibility of the background shadow is very important for depth estimation. The author should analyze the contribution of two different kinds of shadows separately.\n3) According to the presenting results, the proposed method has reached an ideal visual and quantitative result in their synthetic dataset. Even the background is well reconstructed with little artifacts. However, the author didn't directly explain why their method is not sensitive to the background (it is almost perfect) and can reconstruct an accurate background simultaneously. (e.g., UNISURF [A] applies NeRF++ [B] for the complex background and assumes black for the simple background. However, their reconstructed background still contains visible artifacts).\n\n[A] Michael Oechsle, Songyou Peng, and Andreas Geiger. UNISURF: Unifying neural implicit surfaces and radiance fields for multi-view reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2021\n\n[B] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv.org, 2020.\n\n---Significance---\n\n1) There is no intuitive explanation for the role of modeling shadow for shape estimation. What is the problem aiming to solve by considering the shadow, the ambiguity, additional clue of global shape information, useful clue for depth estimation, or even light refinement? \n2) The evaluation is mainly conducted on the synthetic dataset built by the authors, which is less convincing. Although the paper shows the result of the LUCES dataset in the supplementary material, the depth estimation is not satisfactory (4.39 vs. 3.17 for the state-of-the-art). This evaluation could not provide strong support to the proposed method.\n3) The compared methods, NeRF and UNISURF, are implemented in a naive way. It is unclear whether the authors have also sampled points along the light ray in those methods. If not, it can be expected that the reconstruction results are not good because NeRF and UNISURF are mainly applied in multi-view stereo, which is constrained by the multi-view photo consistency that is lacking in single-view photometric stereo. Therefore, using them for comparison is less convincing. \n4) As said before, the background shadow's visibility greatly influences the performance. Therefore, this work has lots of pre-setting to the environment (the background is better to be the light color, and the intensity should be bright enough), which will limit the application scope of this method. Moreover, The background's complexity may influence the background shadow's visibility and the shading of the scenes, which further impact the performance of the methods. 
While the background in the synthesis dataset and LUCES is still an ideal case, there is a lack of such discussions on a more realistic scenario.\n Except for the problems listed above, I wonder how well the proposed method can perform under a more complicated background.\n The authors have addressed parts of the limitations and potential negative social impact of their work. However, the impacts of the background's complexity and the shadow's visibility are not stated. \n", " This pape proposed a method that aims at solving the near-field photometric stereo via neural reflectance field representation. The core insight of this paper is to use both the shading and cast-shadows of the view as the cues to recover both the visible and invisible parts of a scene. \nThe paper adopts the NeRF-like neural field representation to represent the occupancy field, reflectance field of a scene. They then use lighting, BRDF modeling and shadow rendering to reconstruct the observed images. The networks are jointly optimized via volume rendering loss, smoothness loss and color loss. Strengths\n- The proposed method can estimate the geometry of the invisible region of the scene from a single-view point. The paper is interesting in estimating invisible regions from the shadows. And unlike previous shadow-based methods, this paper doesn't require explicitly shadow detection as input.\n- The method presents state-of-the-art geometry reconstruction results compare to prior near-field photometric stereo methods. \nEach component of the method, such as shadow and shading, is validated in the ablation studies.\n- The paper is well writen and easy to follow. \n\nWeaknesses\n- Each part of the proposed method is not novel. The method consists of three major parts: neural reflectance field representation follows [38] in occupancy field estimation and normal computation; the color and BRDF modeling follow [15,24] to use the sphere gaussian bases, SG weights and diffuse albedo; the shadow computation is slightly different to [48], but it is similar to [24] where they both check the visibility of the surface point. Hence, in the proposed method and rendering procedure of this paper, the novelty is very limited.\n\n- It is suggested to evaluate this method in more challenging real and synthetic datasets. The method was only tested on two datasets: a simple synthetic dataset and a real dataset. For the synthetic dataset: the background is clean and uniform; how will this paper perform in backgrounds with more textures?\n\n- I am curious to see how the number of lights affects the accuracy of the geometry in the invisible part of the scene. The only cues for invisible regions are the casted shadows on the ground. If lights fail to illuminate the invisible regions from different angles, how will the accuracy be affected?\n Please see the quesitons in the Weaknesses part. - Limitations are well discussed in the paper.\n- Suggestions: It will be great to isolate the invisible region of the scene and show quantitatively how the proposed shadow rendering improve the accuracy in these regions.", " This paper is about using neural radiance field to reconstruction the geometry of an object lit by varying directional light sources, leveraging cues of cast shadow. The optimization uses an efficient way to trace rays for shadow computation. The setup is interesting as in using shadow cues to reconstruct 3D geometry when multi-view captures are not available. 
The scope of the paper is quite limited, in terms of the problem setup and the practicality. The setup is demanding -- real directional light plus known lighting direction. This highly controlled lighting setup seems to only be feasible in a laboratory or a light stage setting. This may be the reason the paper only shows toy synthetic examples. Another 'contrived' setup is in the simple scene / background. Cast shadow depends on the object geometry, the lighting, and also the surface the shadow is cast on. The geometry of the surface and the geometry of the object can be entangled to cause ambiguity. This is more severe in real-world examples. The task can be unnecessarily (due to such setup being unlikely in real-world scenarios) challenging and ambiguous without any priors on the object geometry. The reconstructed geometry is not on par with multi-view settings, and degrades drastically as the number of lights decreases. My biggest concern about this paper is it makes the reconstruction task unnecessarily challenging and impractical. - to make the setup slightly more practical, how well does the method perform when shadow is softer, i.e., slightly larger light source, or small area lights?\n\n- line 149: has the proposed shadow generation method been validated over direct regression method using an MLP?\n\n- fig 4: should also show the nearest neighbor training samples The calibrated lighting is more demanding than calibrated camera pose, which can be easily acquired, even though not entirely accurate, on most modern smartphones. The lighting needs to be real directional light source, which is difficult to acquire in real-world scenarios.\n\nIt would be also valuable to discuss potential applications for the proposed setup and algorithm.", " This paper proposed a single-view scene reconstruction method under different point light conditions based on the neural reflectance field. It uses the inverse-square law and BRDF to model the shading and root finding to model the shadow. By explicitly utilizing both shading and shadow cues, the model can reconstruct the scene from single view, even for some invisible regions.\nThe authors evaluated the model with different objects and showed that the method can well reconstruct the geometry with images from single view but different point lights.\n ### Strengths\n\nA novel NeRF-based method that faithfully reconstructs a scene from a fixed viewpoint with multiple single point light conditions by explicitly modeling the shading and shadow. By carefully modeling the single point light shading by inverse-square law and BRDF, and shadow by root-finding. This method gives an excellent result.\nThe paper clearly illustrated the method in detail. The authors compared their method with other methods quantitatively and qualitatively. They also evaluated their model with multiple occluding objects, showing it can reconstruct the unseen area with shading and shadow cues.\n \n### Weakness\n\nThe authors studied the contribution of different components (shading and shadow) of their method. But the impact of different loss terms was not included. Specially, at L171, how the value of $\\alpha$ for normal smoothness loss is found is not illustrated.\n \n How’s the normal smoothness loss affect your method? Is that similar to UNISURF? From your result, especially on LEGO, the loss of detail of studs and track seems very serious. 
Is that caused by the normal smoothness loss or by more complicated light reflection in those areas?\n Yes, the authors addressed different aspects of the limitations of the work in the paper and supplementary materials. They also pointed out that this method can recover some non-directly-visible areas, which may raise some privacy concerns. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "GqkNEe-O7U", "aCxLvsYrVkh", "X9Dek14yHtp", "xZfN-ACVRlP", "nips_2022_tvwkeAIcRP8", "fqNX1xZ9ro_", "nips_2022_tvwkeAIcRP8", "aCxLvsYrVkh", "GqkNEe-O7U", "GqkNEe-O7U", "X9Dek14yHtp", "N6NgMgnZjY3", "N6NgMgnZjY3", "nips_2022_tvwkeAIcRP8", "nips_2022_tvwkeAIcRP8", "nips_2022_tvwkeAIcRP8", "nips_2022_tvwkeAIcRP8", "nips_2022_tvwkeAIcRP8" ]
nips_2022_YCPmfirAcc
High-dimensional Additive Gaussian Processes under Monotonicity Constraints
We introduce an additive Gaussian process (GP) framework accounting for monotonicity constraints and scalable to high dimensions. Our contributions are threefold. First, we show that our framework makes it possible to satisfy the constraints everywhere in the input space. We also show that more general componentwise linear inequality constraints, such as componentwise convexity, can be handled similarly. Second, we propose the additive MaxMod algorithm for sequential dimension reduction. By sequentially maximizing a squared-norm criterion, MaxMod identifies the active input dimensions and refines the most important ones. This criterion can be computed explicitly at a linear cost. Finally, we provide open-source code for our full framework. We demonstrate the performance and scalability of the methodology on several synthetic examples with hundreds of dimensions under monotonicity constraints, as well as on a real-world flood application.
Accept
This paper deals with the problem of regression with an additive Gaussian process prior and a linear inequality constraint. A finite-dimensional approximation to the Gaussian process is proposed in terms of a linear combination of triangular basis functions with Gaussian weights. The weights are then estimated by solving a quadratic program, or approximately sampled, to handle the inequality constraints. Additionally, the authors consider the problem of variable selection and propose a forward selection method based on the difference between the posterior mode with and without the inclusion of a particular variable. The reviews were mixed but lean towards acceptance.
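For readers who want to see the pieces of the meta-review above in code, the following is a minimal sketch of a one-dimensional version of the construction: triangular (hat) basis functions on a knot grid, a GP prior on the knot values, and a constrained MAP estimate obtained by solving the resulting quadratic program with monotonicity enforced at the knots. The kernel choice (Matérn-5/2), the equispaced symmetric hats, and the use of scipy's SLSQP solver are illustrative assumptions only; the authors' own implementation is in R (they mention quadprog's solve.QP in their responses below) and handles the additive multi-dimensional case.

```python
import numpy as np
from scipy.optimize import minimize

def hat_basis(x, knots):
    """Triangular (hat) basis functions on an equispaced knot grid, evaluated at x."""
    delta = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / delta)

def matern52(a, b, length=0.3, var=1.0):
    """Matern-5/2 covariance between two sets of 1-D locations."""
    s = np.sqrt(5.0) * np.abs(a[:, None] - b[None, :]) / length
    return var * (1.0 + s + s**2 / 3.0) * np.exp(-s)

def constrained_map(x, y, knots, noise_var=1e-2, jitter=1e-8):
    """MAP of the Gaussian knot weights under non-decreasing (monotonicity) constraints."""
    Phi = hat_basis(x, knots)                        # n x m design matrix
    K = matern52(knots, knots) + jitter * np.eye(len(knots))
    Q = Phi.T @ Phi / noise_var + np.linalg.inv(K)   # quadratic term of the negative log-posterior
    c = Phi.T @ y / noise_var                        # linear term
    cons = [{"type": "ineq", "fun": lambda xi, j=j: xi[j + 1] - xi[j]}
            for j in range(len(knots) - 1)]          # xi_{j+1} >= xi_j at every pair of knots
    res = minimize(lambda xi: 0.5 * xi @ Q @ xi - c @ xi, np.zeros(len(knots)),
                   jac=lambda xi: Q @ xi - c, method="SLSQP", constraints=cons)
    return res.x                                     # constrained mode of the knot values

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 30)
y = np.arctan(5.0 * x) + 0.05 * rng.standard_normal(30)    # noisy monotone signal
xi_map = constrained_map(x, y, np.linspace(0.0, 1.0, 15))  # non-decreasing by construction
```

In the additive case the design matrix would stack per-dimension hat bases, which is why, as the authors note in their discussion of Eq. (11), the joint optimization is not separable across dimensions.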
train
[ "so_mI1tjErg", "89C0hnHwwZ", "d72qgfLVXV4i", "da1gfZzCbzY", "FyiNk69Mss2", "DwPeeaFW_Hp", "P7iEzuF9nTr", "O6wL2_cQrt", "_0q0KZmaf2b", "EmaobDsKNA", "02RgJJ48ULT", "DfktHssD1gT", "okrs-HKL2B9" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nThank you for updating your rating from \"weak accept\" to “accept”. We appreciate it. \n\nThe author(s)", " We are grateful to the reviewer for the response. We next provide replies to their two questions.\n\n- **Why is extending the proof in the prior reference [18] categorically different from the more classical statistical consistency analyses of van der Vaart, and the recent work on performance certificates for sparse approximations?**\n\n Let us explain first why the question of extending the proof in the prior reference [18] is categorically different from classical statistical consistency analyses, for instance in the book [Asymptotic statistics] by Van der Vaart. We point out that in [18], the data set of size $n$ is fixed, and the dimension of the approximation space increases. Thus, the arguments in [18] are deterministic, since the data set is fixed and there is no source of randomness. In contrast, the analysis in [Asymptotic statistics] is stochastic. We also remark that in [Asymptotic statistics], no chapters address monotonicity constraints (the only constraint addressed in the book is a unimodality constraint in density estimation). Finally, in the book [Asymptotic statistics], the number of observations $n$ goes to infinity, while in the case of [18] the data set (thus its size $n$) is fixed. Hence, the notion of consistency in [Asymptotic statistics] is different from the notion of convergence in [18].\n \n Our response is the same as above for recent works on performance certificates for sparse approximations. For instance, in the convergence result given in Corollary 6.1 of the book [Statistics for High-Dimensional Data, Bühlmann and van de Geer, 2011], the analysis is stochastic, and consistency is defined in the setting where both the data size $n$ and dimension $p$ go to infinity.\n\n- **What is it specifically about the \"15 pages of proofs\" from the prior reference [18] that make it difficult to carry over to the setting considered in this work?**\n \n As we have written in our previous response, we think that the convergence proof of [18] can be extended to our setting, but it is difficult to anticipate the time and space needed to write a such proof, even in the favorable case where no unexpected obstacles arise. Let us now present the specific technical challenges.\n\n In [18], the challenging point is to prove the convergence of a sequence $\\hat{Y}_m$, $m \\in \\mathbb{N}$, of finite-dimensional constrained GP mode functions from $F \\subset [0,1]^d$ to $\\mathbb{R}$. Here $F = F_1 \\times \\dots \\times F_d$ where $F_i \\subset [0,1]$ is the closure of the set of one-dimensional knots for variable $i$. In the convergence, the data set of size $n$ is fixed and the number of basis functions $m$ goes to infinity. The basis functions are from $F$ to $\\mathbb{R}$ and their $\\mathrm{span}$ should be dense in the set of continuous functions from $F$ to $\\mathbb{R}$.\n \n In our additive setting, if we were to extend the convergence proof of [18], the fact that our mode function is additive does not imply that we can address the case of each dimension separately. Indeed, there is only one single quadratic optimization problem for all the $d$ unidimensional functions of the additive mode (Equation 11, where the optimization problem is not separable, because of the posterior cross-covariances between the additive unidimensional GPs, Line 113). 
Hence, we anticipate that if we have $m_1,\\ldots,m_d$ knots for the $d$ dimensions, we would need to consider $m_1 + \\dots + m_d$ basis functions from $[0,1]^d \\to \\mathbb{R}$, with for instance the function number $1$ using only the first variable and the function number $m_1 + 1$ using only the second variable. Then, we would argue that the $\\mathrm{span}$ of these functions is dense in the set of additive functions from $F$ to $\\mathbb{R}$. Here, again, $F = F_1 \\times \\dots \\times F_d$ where $F_i \\subset [0,1]$ is the closure of the set of one-dimensional knots for variable $i$.\n \n Hence, the main difference with [18] is that we consider additive functions instead of general ones. This implies that we would need to see if all the arguments of [18] can be extended to this additive case. This includes, among others, defining the RKHS spaces for the limit additive function to the sequence of additive modes, defining an additive multilinear extension, defining the constraint and interpolation spaces for functions defined only on $F$ rather than on $[0,1]^d$, proving that the consequent extensions of $(H1,F)$ and $(H2,F)$ on page SM16 in the online supplement to [18] hold, and finally extending Lemma SM4.3 in the same supplement.\n \n Note finally that [18] extends the proof of Theorem 3 in [6], which addresses the case of non-additive mode functions from $[0,1]^d$ to $\\mathbb{R}$. We would thus also need to check that this proof can be extended to the case of additive functions from $[0,1]^d$ to $\\mathbb{R}$.\n \nWe will add a discussion to the manuscript on the prospect of obtaining a convergence proof, and the corresponding specific difficulties.", " The authors have addressed many of my concerns. However, the discussion of theoretical insights is lacking. What is it specifically about the \"15 pages of proofs\" from the prior reference that make it difficult to carry over to the setting considered in this work? Why is it categorically different from the more classical statistical consistency analyses of van der Vaart, and the recent work on performance certificates for sparse approximations? That this is unaddressed in the manuscript, even if there is no proof, seems a big hole to me. For this reason, I am disinclined to raise my score.", " The response has addressed my main concerns regarding the paper. I don’t have any significant concerns remaining and am happy to raise my rating from \"weak accept\" to “accept”. ", " We would like to thank all three reviewers again for their constructive feedback and the time spent reading the paper. In order to ease the reading of our individual responses, please find a short summary of them in this comment.\n\nThere were various requests for clarifications and discussions in specific locations in the paper, to which we tried to answer as precisely as possible. In addition, we noted, in particular, three comments on **(a)** our positioning compared to the recent literature, **(b)** theoretical convergence guarantees, and **(c)** potential additional numerical experiments.\n\n**(a)** In the response to Reviewer CWsg, we have highlighted that the key benefit of our work compared to existing references is that we are guaranteed to satisfy the constraints (for instance monotonicity) exactly and everywhere in the space. 
Several of the related references that are discussed in the response do not achieve this for monotonicity.\n\n**(b)** In the response to Reviewer CWsg, we have pointed out that several existing convergence guarantees actually apply to unconstrained GP regression, which presents fundamental differences to constrained GP regression as we study here (the posterior goes from explicit to non-explicit). Hence, these guarantees and their proofs are not applicable to our setting. A prospect of this paper is to prove the convergence of the suggested additive MaxMod algorithm, but this is expected to be a challenging extension of an existing proof in the non-additive setting (of about 15 pages).\n\n**(c)** In the response to Reviewer cQK3, we have added a new numerical experiment to explore the robustness to non-additivity, which will be added in the paper. In the response to Reviewer CWsg, we discuss the main challenge to obtaining numerical benchmarks with existing methods for regression with monotonicity constraints: there do not seem to be publically available codes. We will send personal requests for code, in the aim of obtaining numerical benchmarks. Nevertheless, independently of these benchmarks, we know that our method cannot be improved on the criterion of satisfying the inequality constraints, as it does so exactly and everywhere on the space by construction.\n\nIn the individual responses, we have listed the modifications to the paper that we are happy to commit to do. They seem very feasible to implement to us, given the additional content page for the camera-ready version, the possibility to add appendix content, and the time left for the camera-ready version.", " *(follow-up of Part 1)*\n\n**Robustness of MaxMod in the presence of model misspecification:**\nWe must remark that the underlying function in the flood application is indeed non-additive with weak interactions between the input variables (see further details in [25]).\n\nTo enrich the discussion on the robustness of MaxMod in the presence of a non-additive component, we performed a new experiment with a function given by\n$$\ny(\\textbf{x})=\\sum_{i=1}^{d}\\arctan\\left(5\\bigg[1-\\frac{i}{d+1}\\bigg] x_i\\right)+\\lambda x_{D-1}x_D,\n$$\nwith $d=3$ and $D=10$. $\\lambda\\geq0$ is a parameter that controls the influence of the non-additive contribution. Observe that, as $\\lambda$ increases, the influence of the input variables $x_{D-1},x_D$ also increases.\n\nAfter running the experiments for $\\lambda=0,0.5,1,1.5,1.7$ (values chosen so that the Sobol indices for the input dimensions $x_9$ and $x_{10}$ are smaller than 1/5), and for $n=10D$ (value also used in Table 2), we observed that MaxMod properly activates dimensions ($x_1,x_2,x_3,x_9,x_{10}$) in the first iterations while preserving accurate $Q^2$ values. For $\\lambda>2$, since the additive GP is not able to capture the non-additive behavior, the performance of MaxMod decreases. Next, we show a table containing our findings.\n\n|$\\lambda$|Sobol index $x_D$|active dimensions|knots per dimension|$Q^2$ [%]|\n|-|-|-|-|-|\n|0|1.7$\\times$10$^{-5}$|(2,1,3)|(5,5,3)|99.8|\n|0.5|0.02|(2,1,3,10,9)|(5,5,3,2,2)|99.2| \n|1|0.08|(1,2,3,9,10,5)|(5,4,3,2,2)|97.6| \n|1.5|0.15|(1,2,3,10,9)|(5,4,3,2,2)|95.5|\n|1.7|0.18|(1,2,10,3,9,5)|(5,4,2,3,2,2)|94.7|\n\nWe will reinforce the discussion on the non-additive nature of the underlying function of the flood application. 
We will also add a new appendix reporting the results of the aforementioned synthetic example.", " We thank the reviewer for the careful reading of the paper, the overall positive evaluation, and the constructive suggestions, in particular about the assessment of the MaxMod algorithm with model misspecification. Below are our responses to the reviewer's comments and questions. \n\n---\n\n**Minor changes:**\n- We corrected the typos pointed out by the reviewer and carefully double-checked the paper.\n- We changed the term \"information benefit\" to \"contribution benefit\".\n- We generated figures with 1D illustrations of the hat basis functions, and samples from a GP and a finite-dimensional GP (without constraints). We will add them in Section 3.1.\n- We agree that the intermediate equality in Eq. 10 is misleading, although mathematically correct. We decided to remove it. Indeed, it is not correct to compute a MAP estimate along each dimension separately, since the GP values across different dimensions have non-zero posterior covariances.\n\n**Novel contributions related to constrained GPs compared to [5,6]:**\nAlthough our main contributions are already summarized in the introduction, we agree that adding remarks throughout Section 3 to highlight the differences between our work and the ones in [5,6] will improve the discussion. Some of them will be related to: \n- The construction of the asymmetric hat basis functions (compared to the symmetric ones considered in [5,6]). It allows the MaxMod algorithm to insert knots promoting non-equispaced designs.\n- The novel additive kernel considering the non-centered hat basis functions.\n- The new prediction formulas for the additive case (see Eq. (11) and expressions below). We will highlight that our framework is not a simple application of the priors from [5,6] to each dimension since the values at the knots (called the Gaussian weights by the reviewer) at all the dimensions, conditioned to the observations and constraints, are mutually dependent. Thus, they need to be jointly estimated.\n\n**On the definition of the set of linear inequality constraints:**\nAs stated in Section 3.2 we consider componentwise constraints. Thus a multivariate function is non-decreasing iff, by definition, it is non-decreasing w.r.t. each component (lines 93-95). Indeed, the first equivalence at lines 96-97 is just a definition. We will clarify this in the text. \n\nEq. (8) is indeed more restrictive than having a general convex set $\\mathcal{C}_i$. Nevertheless, only (8) makes our implementation possible, yielding numerical optimization with a finite number of linear constraints. To make this clearer, in Section 3.2, we will directly consider a convex set $\\mathcal{C}_i$ of the form (8).\n\n**On the choice of LHD:**\nIn a preliminary experiment on the flood application, we indeed considered a random selection of the training sets. We observed similar results to the ones presented in the paper for $n>4d$. For $n=2d,3d$, the methodology led to higher variability in the order selection of the input variables and the $Q^2$ results. We still noted that MaxMod properly activated the most relevant dimensions in the first iterations of the algorithm.\n\nHence, using LHDs is not necessary for the implementation of our methodology. However, we recommend it when the user is able to design the experiments. Indeed, as mentioned at the beginning of Section 5, the theoretical benefits of LH sampling for additive functions have been demonstrated in [23]. 
In practice, as seen above for the flood application, using LHDs can reduce the variability in the order selection of the input variables and improve prediction performance. In the paper, we will enrich the discussion of the impact of the experimental design.\n\n*(to be continued in the next comment)*", " *(follow-up of Part 1)*\n\n**Numerical benchmark:**\nA comparison between the constrained finite-dimensional GP and the unconstrained fully dense GP is already studied in [5]. There, the experiments show that predictions are outperformed by the former when the response satisfies constraints. We remark that in unconstrained regression as in [Burt et al 2019, Koppel et al 2021], one can indeed compare the full GP with its approximation, while in constrained regression, the full GP cannot be implemented (even when $n$ is small) because there is an infinite number of constraints, for instance for monotonicity.\n\nFor a fair benchmark, we found two Bayesian works accounting for monotonicity over a finite set of points [Da Veiga and Marrel 2020; Riihimäki and Vehtari 2010], however, we were not able to make numerical comparisons. We only found codes for the second approach. We could not execute their codes either in R (via RcppOctave as suggested by the authors) or in Octave. The RcppOctave package seems obsolete since it was archived in 2017. We installed the latest checked version (2015) but we got an error related to the Octave configuration in both Windows and Ubuntu. We installed the GNU Octave but we got errors related to C compilers.\n\n[Koppel et al 2019] provide prediction functions that are close to being monotonic (among other constraints this reference tackles) so we could in principle make a numerical comparison with this work, but we have not found any mention of a publically available code in this reference.\n\nWe will contact the authors of the three above references with the aim of accessing an implementation of one of the methods, in order to make a numerical comparison with our work.", " We are grateful to the reviewer for the careful reading of the paper, for highlighting its innovative elements, for pointing out additional references to us, and for raising the important questions of numerical benchmarks and convergence guarantees. Below are our responses to the reviewer's comments and questions. \n\n---\n\n**Link with existing frequentist works:** We are aware of the link between our work, based on Bayesian regression, and frequentist ones based on the minimization in an RKHS of a penalized least squared criterion, under inequality constraints. It was evocated in [6] (Remark 4) and studied by X. Bay, L. Grammont and H. Maatouk in:\n\n(a) Generalization of the Kimeldorf-Wahba correspondence for constrained interpolation. EJS, 10(1) 1580-1595, 2016\n\n(b) Constrained Optimal Smoothing and Bayesian Estimation, hal-03282857, 2021\n\n[Koppel et al 2019] can fit the scope of (b) when the loss function is quadratic, as the set of functions $f$ s.t. $G(f)< 0$ is convex (Eq. (2) in this reference). It provides a useful computational method, although the constraints are not satisfied everywhere in the space, a key property of our work. We will add a paragraph to advertise the link Bayesian/frequentist.\n\n[Marteau-Ferey et al 2020] does not directly address the monotonicity and convexity constraints that we consider, although it mentions convexity as future work. We will also discuss this in the paper. 
\n\n**Link with other works on complexity reduction:**\n[Burt et al 2019, Koppel et al 2021] address GP regression, with no constraints and with $n$ large. There, the Gaussian posterior is explicit and the computational cost for computing posterior moments or sampling from the posterior is $O(n^3)$. Hence, there are no challenges when $n$ is not large (otherwise, the observations are replaced by a smaller number of inducing points).\n\nIn contrast, our constrained Gaussian posterior is not explicit. Hence, even when $n$ is not large, it is challenging to compute posterior moments or to sample from the posterior, due to the high dimensionality of the state space of MCMC procedures. Our work thus enables reducing this sampling problem dimension, by using additive functions that are parameterized by a minimal number of knots.\n\nHence, our work and the ones mentioned above address different sources of computational complexity, and thus, arguably, their merits cannot directly be compared. We will add explanations of this difference in scope between our work and the above ones.\n\n**On the theoretical convergence guarantees:**\n[Burt et al 2019, Koppel et al 2021] provide convergence guarantees as a function of the model complexity. Currently, we do not tackle this point. As discussed above, our framework is different from theirs, so their guarantees or proof techniques cannot be applied in our setting. We will add a discussion of the existing related guarantees of the above references.\n\nOur work extends the MaxMod algorithm [18] to the additive setting, for which a convergence guarantee exists as the number of iterations increases. It needs a proof of about 15 pages. We believe that this convergence also holds for the additive case. Nevertheless, its proof appears to yield additional challenges since the additive case involves, in particular, different function spaces and multi-dimensional basis functions. Thus, this proof is currently an open question, which we consider addressing in future work.\n\n**Is the squared norm approach sufficient?**\nThe main benefit of the squared norm is that it takes an explicit expression with a linear computational cost (Propositions 1 and 2). Using other metrics such as Hellinger and Wasserstein may not be as favorable computationally. In the non-additive setting [18], using the squared norm also enables to obtain the theoretical convergence of MaxMod, which we conjecture holds similarly in our setting, as discussed above. Hence, as things stand, we do not see limitations in using this norm in our setting.\n\n**On the guarantee of the constraints throughout MaxMod:**\nAs previously discussed, our work verifies the constraints everywhere and not only in a finite set of points as in [8], or only approximatively as in [Koppel et al 2019]. This results from the piecewise linear approximation (see the theoretical foundation in [6]). This guarantee holds throughout the application of MaxMod since the model preserves the piecewise linear property in each step of the algorithm. This explanation was already in the paper and we will highlight it more.\n\n*(to be continued in the next comment)*", " We are grateful to the reviewer for the careful reading of the paper, and for the overall positive evaluation. Below are our responses to the reviewer's comments and questions. \n\n---\n\n**On the estimation of the kernel parameters:**\nThe kernel parameters are estimated via standard maximum likelihood once the GP framework is established. 
This is an intermediate step before solving the optimization problem in Equation (11). By fixing the kernel parameters, then (11) is convex and can be solved via quadratic programming. In particular, we use the function \"solve.QP\" from the R package \"quadprog\" (reference [21] in the paper).\n\nAlthough we briefly detailed in the 2D illustration that the kernel parameters are estimated via maximum likelihood, we will add a more general sentence to avoid ambiguity. We will also clarify that (11) is convex, once the kernel parameters are fixed and estimated.\n\n**On the definition of the hat basis functions:**\nThe compact input domain $[0, 1]$ is considered to simplify theoretical statements and numerical implementations. In practice, one-dimensional bounded domains can be transformed into the domain $[0, 1]$ before applying the GP framework. Therefore, we argue that considering the domain $[0,1]$, for each of the input variables, does not represent a limitation on the applicability of the methodology.\n\nThe hat basis functions are chosen to ensure the constraints everywhere by imposing them only at the knots (see equivalence in (6)). This is a crucial property in applications where responses satisfy physical constraints such as monotonicity or convexity. To the best of our knowledge, the aforementioned property is not entirely fulfilled by many other Bayesian or frequentist approaches from the state-of-the-art, or by considering other types of basis functions in our framework (e.g. Gaussian functions).", " This work develops an algorithm for GP inference with constraints through a novel analytical update rule for an additive variant of GP inference. An active set technique for which model points to retain is developed based on Euclidean subspace projections. Numerical results illuminate the merits of the proposed approach.\n\n Strengths:\n\nThe algorithm appears novel to my knowledge, and addresses a fundamental and open problem in additive reparameterizations of Bayesian linear regression with Gaussian processes.\n\nThe 2d example, as well as the extended numerical evaluations, provide substantive evidence that the proposed technique works well In practice for imposing constraints in this setting.\n\n\nWeaknesses:\n\nThe authors should contrast their approach to imposing constraints through Bayesian regression with frequentist analogues. In particular, through a well-known link to kernel ridge regression, one can impose convex or linear constraints, which are very much related to the setting considered here. See, for instance:\n\nA. Koppel, K. Zhang, H. Zhu, and T. M. Basar. ”Projected Stochastic Primal-Dual Method\nfor Constrained Online Learning with Kernels” in IEEE Trans. Signal Processing, May. 2019.\n\nMarteau-Ferey, U., Bach, F., & Rudi, A. (2020). Non-parametric models for non-negative functions. Advances in neural information processing systems, 33, 12816-12826.\n\nAnd follow-on works.\n\nThe authors also consider a way to deal with complexity bottleneck of GP posterior inference through a sequential dimensionality reduction approach with the squared-norm. Substantial complexity reduction of computing an inference is achieved through this active set approach. What I wonder is how is this related or different from a large history on active set techniques for complexity reduction of GP inference. In particular, recent works have established specific convergence guarantees as a function of model complexity. 
How does the proposed technique contrast with these performance certificates? Is the squared-norm approach sufficient due to the additive linear nature, but beyond this situation one should consider metrics in distribution space (such as Hellinger or Wasserstein?)\n\nMcAllester DA (1999) \"Pac-bayesian model averaging.\" In: Proceedings of the twelfth annual conference on Computational learning theory, pp 164–170\n\nBurt D, Rasmussen CE, Van Der Wilk M (2019) \"Rates of convergence for sparse variational gaussian process regression.\" In: International Conference on Machine Learning, pp 862–871\n\nA. Koppel, H. Pradhan, and K. Rajawat. “Consistent Online Gaussian Process Regression\nWithout the Sample Complexity Bottleneck,” in Statistics and Computing, Springer, Sept.\n2021\n\nSuch a contrast is not carefully addressed in this manuscript. Specifically, see Table 1 in the last of the preceding list of references.\n\nThe main theoretical results are of a nature of providing analytical expressions for the update rules. While this is credible and interesting, additional justification in the context of what would/would not guarantee constraint satisfaction is missing.\n\nThe numerical results do not really consider strong benchmarks for evaluating the tradeoffs in the constraint satisfaction/model fitness achieved by this method as compared with some alternatives. Moreover, it is unclear how does the active set approach appropriately capture the right statistical properties of the fully dense GP. \n\n\n\nOverall opinion: \n\nThe algorithm is novel, and its update expressions combined with the active set approach to dimensionality reduction, ensures its efficiency. However, these aspects are only addressed heuristically, and no performance certificates are provided. Moreover, numerically no strong baseline is considered to determine whether the proposed technique actually achieves state of the art performance. For these reasons, the work is below the bar.\n\n See response to previous field. See response to previous field.", " The authors propose an additive Gaussian process with support for monotonicity constraints and scalable to high dimensions. In particular, a sequential dimension reduction algorithm is presented which can identify the active input dimensions. The author also provided illustrative results on real-life data.\n\n [Strengths]\n1. The paper is well-written. The proposed GP framework is an interesting generalization of additive GPs.\n2. The technical derivation looks solid, and the kernel of the proposed GP has a simple analytical form.\n3. The authors also validated the method on a Vienne river flood data where the proposed MaxMod algorithm correctly activated the relevant input variables.\n\n[Weaknesses]\n1. It's not clear how the kernel parameters are estimated. In Eq. (11), the objective is given by a constrained quadratic optimization. However, this objective could be nonconvex in terms of the kernel parameters. Are there efficient solvers?\n2. The proposed GP framework seems to rely on particular basis functions (Eq. (4)) which have domain [0, 1]. Does this domain impose limitations on the applicability of the GP framework? What are the desiderata for these basis functions?\n 1. Are the kernel parameters inferred jointly with other parameters in Eq. (11)?\n2. What are the desiderata for the basis functions Eq. (4)?\n N/A", " The authors consider the problem of regression tasks with an additive Gaussian process prior and linear inequality constraint. 
The authors propose a finite dimensional approximation to the Gaussian process as a linear combination of triangular basis functions with Gaussian weights. The weights are then estimated via solving a quadratic program (for MAP estimation) or approximately sampled with HMC to handle the inequality constraints. Additionally, the authors consider the problem of variable selection in this model, and propose a forward selection method based on the difference between the posterior mode with and without the inclusion of a particular variable (or knot). The authors demonstrate their method for inference and variable selection on synthetic data, that is generated to be additive and monotonic along certain dimensions. Finally, the authors apply the method to a flood dataset generated by a utility company via simulation. ### Strengths\n- The scope of the paper is well-defined and of interest to those modeling with Gaussian processes. Incorporating prior knowledge such as inequality constraints into these models seems to be an important task. \n- The forward selection algorithm (MaxMod) for selecting which dimensions to include seems appealing for high-dimensional datasets, and the experiments seem to suggest that it can be effective. \n- The writing is generally quite clear.\n\n### Weaknesses\n- I am concerned that the contribution regarding extension of existing work to handle high-dimensional, additive models is over-claimed. It looks to me like the method considered is very similar to the references [5,6], applying the prior works approach to each dimension (separately).If there is a major distinction that I have missed between the approaches, I would appreciate if the authors could highlight this more explicitly. Otherwise, I think this contribution should be contextualized a bit more clearly relative to existing work, particularly in section 3 where it isn't entirely clear what is new and what is a based directly on existing methods. 1. (This is a comment) I think it would be useful to include a figure showing a.) the hat basis functions, b.) a sample from a 1-dimensional Gaussian process prior with a given kernel (e.g. Matérn 5/2) as well as the sample (using the same realization of Gaussian noise) modified to the finite dimensional model (eqn 5) with a handful of knots.\n\n2. (Lines 96-97) You refer to a function as being monotonic on $[0,1]^d$. I don’t think this is a well-defined notion (since $[0,1]^d$ isn’t totally ordered). What do you mean by this? I assume you mean that restricted to each component it is monotonic, in which case the first equivalence you state is really just a definition. This should be made clear. \n\n3. Equation 8 seems more restrictive than the original formulation in which $\\mathcal{C}_i$ need only be convex. Are they somehow equivalent? If not, it would be best to initially introduce $\\mathcal{C}_i$ as a set defined by finitely many linear inequality to avoid confusion.\n\n4. Notationally, the first equality in (eqn 10) seems to suggest that the MAP estimate can be computed by computing a MAP estimate along each dimension separately and combining these. Is this what is intended? I don’t see why this would be true (and in equation 11 this does not seem to be the case). If it is, could you please give some explanation as to why? If not, consider changing the notation slightly.\n\n5. (Minor, line 116) “et” -> “and”\n\n6. (Minor) Is the information benefit directly tied to ideas from information theory? If not, it may be worth considering renaming.\n\n7. 
(Minor, Algorithm 1, 5) “D” -> “d”\n\n8. In the flood dataset experiments, you selected your training set via finding a subset as close as possible to a LHD (250-251). Would findings have differed if you had instead selected training data uniformly at random? Similarly, what if $n$ were somewhat larger than $2d$? In particular, is it important to the practical application of the method that the covariate design be something like a LHD? I didn’t see where an assumption like this was made in the discussion of the method, but it seems all of the experiments used this. \n\n9. How robust is the proposed MaxMod algorithm in the presence of model mis-specification? For example, if the function being modelled has non-additive components, can this lead to issues with variable selection? There is some discussion of limitations in the paper. I think it would be nice to see a synthetic experiment where the assumptions of the model are not precisely true (e.g. the function used to generate the data has some, perhaps small, additive component). I expect most real-world data is not exactly additive, and it would be nice to see how robust the proposed method is to small mis-specifications. " ]
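As a companion to the MaxMod robustness experiment described in the author responses above (the Part 2 reply on model misspecification), here is a short sketch that reproduces the synthetic test function with the non-additive term λ·x_{D−1}x_D and the n = 10D design size reported there. Only the data generation is shown; the additive-GP fit and MaxMod selection are omitted, and the use of scipy's Latin-hypercube sampler and the seed are our own choices rather than the authors'.

```python
import numpy as np
from scipy.stats import qmc

def maxmod_test_function(X, d=3, lam=1.0):
    """Additive arctan signal in the first d inputs plus a non-additive coupling
    lam * x_{D-1} * x_D between the last two inputs, as in the rebuttal."""
    D = X.shape[1]
    slopes = 5.0 * (1.0 - np.arange(1, d + 1) / (d + 1))
    additive = np.arctan(slopes[None, :] * X[:, :d]).sum(axis=1)
    return additive + lam * X[:, D - 2] * X[:, D - 1]

D = 10
n = 10 * D                                            # n = 10 D, as in the reported table
X = qmc.LatinHypercube(d=D, seed=0).random(n)         # Latin hypercube design on [0, 1]^D
for lam in (0.0, 0.5, 1.0, 1.5, 1.7):                 # same lambda grid as the rebuttal
    y = maxmod_test_function(X, d=3, lam=lam)
    print(lam, round(float(y.std()), 3))              # spread grows with the coupling strength
```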
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "da1gfZzCbzY", "d72qgfLVXV4i", "_0q0KZmaf2b", "DwPeeaFW_Hp", "nips_2022_YCPmfirAcc", "P7iEzuF9nTr", "okrs-HKL2B9", "_0q0KZmaf2b", "02RgJJ48ULT", "DfktHssD1gT", "nips_2022_YCPmfirAcc", "nips_2022_YCPmfirAcc", "nips_2022_YCPmfirAcc" ]
nips_2022_Qt4rKNYzcO
Enhanced Latent Space Blind Model for Real Image Denoising via Alternative Optimization
Motivated by the achievements in model-based methods and the advances in deep networks, we propose a novel enhanced latent space blind model based deep unfolding network, namely ScaoedNet, for complex real image denoising. It is derived by introducing latent space, noise information, and guidance constraint into the denoising cost function. A self-correction alternative optimization algorithm is proposed to split the novel cost function into three alternative subproblems, i.e., guidance representation (GR), degradation estimation (DE) and reconstruction (RE) subproblems. Finally, we implement the optimization process by a deep unfolding network consisting of GR, DE and RE networks. For higher performance of the DE network, a novel parameter-free noise feature adaptive enhancement (NFAE) layer is proposed. To synchronously and dynamically realize internal-external feature information mining in the RE network, a novel feature multi-modulation attention (FM2A) module is proposed. Our approach thereby leverages the advantages of deep learning, while also benefiting from the principled denoising provided by the classical model-based formulation. To the best of our knowledge, our enhanced latent space blind model, optimization scheme, NFAE and FM2A have not been reported in the previous literature. Experimental results show the promising performance of ScaoedNet on real image denoising. Code is available at https://github.com/chaoren88/ScaoedNet.
Accept
The paper under review introduces a deep unrolling network driven by a latent space blind model for image denoising. Although the network combines known components, it has novel elements and sound algorithms; the experimental results are robust, the implementation details are rich, and the ablation study is extensive. Revisions and rebuttals addressed most of the reviewers' concerns, leading them to improve their scores. Therefore, I recommend accepting this paper.
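To make the training objective discussed in the author responses further down concrete, here is a small numeric sketch of the loss structure they describe: an L1 reconstruction term plus an SSIM term, and a noise-map estimation term whose K−1 stage weights form a geometric sequence summing to one, ν_i = (ι−1)ι^{i−1}/(ι^{K−1}−1). The SSIM term is written here as 1 − SSIM and the default γ, η, ι values are placeholders; neither choice comes from the excerpt, which only gives the weighting formula.

```python
import numpy as np

def stage_weights(K, iota=2.0):
    """Geometric weights nu_i for the K-1 degradation-estimation stages (K >= 2);
    later stages get larger weights and the weights sum to one."""
    i = np.arange(1, K)
    return (iota - 1.0) * iota ** (i - 1) / (iota ** (K - 1) - 1.0)

def total_loss(recon, target, ssim_val, sigma_hats, sigma_true, alpha,
               gamma=0.2, eta=1.0, iota=2.0):
    """L1 reconstruction term plus an SSIM term, plus the weighted noise-map loss.
    `ssim_val` is a precomputed SSIM score in [0, 1]; `alpha` is the indicator
    (1 for synthetic pairs with a known noise map, 0 for real pairs)."""
    rec = np.abs(recon - target).mean() + gamma * (1.0 - ssim_val)
    nu = stage_weights(len(sigma_hats) + 1, iota)
    noise = sum(w * np.abs(alpha * (s_hat - sigma_true)).mean()
                for w, s_hat in zip(nu, sigma_hats))
    return rec + eta * noise
```

The geometric weighting encodes the intuition stated in the response: later degradation-estimation stages are expected to be more accurate and are therefore penalized more heavily, while α switches the noise-map term off entirely when no ground-truth noise map is available.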
train
[ "jle9pAo1lm_", "j7lHtgmyz51", "NlewbzwMfY-", "vefsm2Hwbs7T", "vRuSvI-gstg", "Z3UreSZ1tcZ", "war4AMBoEo_", "xUbYbiucJnI", "aVi0roC7IQG", "cs5Qu9yXAtE", "MEAoqydzf9m", "WBCHVg9lwU" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your useful and kind comment. We appreciate your positive comment about our method, and also appreciate that you raised the final rating. Thank you!\n\nFor the main reason of using LS, your understanding is correct. It is based on the proposed task formulation, which can break through the limitation of the conventional unfolding framework. We will add more analysis of LS in the final version. Thanks for your suggestion.", " Thank you so much for your detailed discussion of LS!\n\nYour explanation of the reason for using LS is clear enough. You differentiated the encoder $E$ from conventional convolution layers. In my understanding, the main reason of using LS is that the task formulation is based on it. Whether my understanding is correct?\n\nDue to your modeling of the image denoising task and your analysis, I decide to raise my final rating to WA. I suggest the authors to put more analysis of LS in your final version because I'm confused by the reason of emphasizing LS at first (If you have explained it in the paper and I missed it, I'm very sorry).", " Thank you for your encouraging response. We are very glad that the reviewer will raise the final rating.\n\n>**Most of the deep-learning based methods of low-level tasks usually use a convolutional layer to transform the input image to its corresponding feature. Can you differentiate your LS from the previous works?** \n\nOur paper focuses on the unfolding based real denoising, and thus we first analyze our method and other conventional unfolding based networks for low-level tasks.\n\n$\\circ$$\\circ$ **Analyses on our method and other conventional unfolding based networks:**\n\nThe unfolding framework mainly contains two phases: 1) constructing a model-based algorithm for a certain low-level task; 2) designing a network to implement the previous algorithm.\n\n①\t ***The drawback of the conventional unfolding based networks:***\n\nIn the conventional unfolding framework, 1) the model-based algorithm is generally based on the traditional maximum a posteriori method, which models the image formation process in image space and integrates image priors into the prediction; 2) the network is designed to implement the iterative steps of the model-based algorithm.\n\nSince the input and output of the reconstruction update step in the model-based algorithm are in image space (i.e., the algorithm limitation on input and output), the input and output of the corresponding reconstruction sub-network ($\\mathbf{z}$-Net) must be in image space as well no matter how $\\mathbf{z}$-Net is designed. Therefore, the information flow among (not inside) sub-networks will be constrained to a very low dimension, resulting in performance decrease. 
**This is a drawback caused by the algorithm limitation in the conventional unfolding framework.**\n\n②\t***Using LS to break through the limitation of the conventional unfolding framework:***\n\nLS plays important roles in both the model-based algorithm and network design: 1) for the model-based algorithm, it can model the image formation process in high-dimensional LS and integrate LS priors into the prediction, instead of in the image space; 2) for the network design, the input and output of each $\\mathbf{z}$-Net in the unfolding framework are transferred to the high-dimensional LS instead of the image space, which improves the information flow among the sub-networks.\n\n$\\circ$$\\circ$ **Differences between our method and other conventional networks for low-level tasks with feature transform on the input image:**\n\nLS leads to an encoding module $E$ at the beginning of the whole network, which seems to be similar to some conventional networks for low-level tasks, e.g., RCAN [*1] and RDN [*2], that extract features through a convolution layer at the beginning of the whole network. In fact, they have the following differences:\n\n①\t***Differences in motivation:***\n\nLS can address the drawback caused by the algorithm limitation in the conventional unfolding framework, and it focuses on the information flow between different reconstruction sub-networks (i.e., $\\mathbf{z}$-Nets). In contrast, for the conventional networks extracting feature by a convolutional layer at the beginning of the whole network, the motivation is to achieve the necessary channel number increasing operation in the deep network, instead of considering the information flow among different stages.\n\n②\t ***Differences in usage:***\n\nSome conventional networks for low-level tasks, e.g., RCAN [*1] and RDN [*2], extract features through a convolution layer at the beginning of the whole network, and then use a skip connection from the initial feature to the output feature. Finally, the output feature is mapped to the image space. However, placing a convolution layer at the beginning of the whole network is optional. According to RIDNet [*3] and AINDNet [*4], a skip connection from the input image and output image can be directly used, and a convolution layer can be placed at the beginning of the residual branch instead of the whole network. According to the results reported in various papers, for a certain network, placing a convolution layer at the beginning of the whole network or the residual branch can achieve similar performance.\n\nIn contrast, according to the proposed model-based algorithm, $E$ must be placed at the beginning of the whole network to obtain the LS embeddings. Otherwise, it will reduce to the case of iteration in the image space due to the algorithm limitation on the input and output of $\\mathbf{z}$-Net, resulting in significant performance decrease.\n\nWe will add more illustrations about LS in the revised version.\n\n[*1] Image Super-Resolution Using Very Deep Residual Channel Attention Networks, ECCV 2018\n\n[*2] Residual Dense Network for Image Super-Resolution, CVPR 2018\n\n[*3] Real Image Denoising with Feature Attention, ICCV 2019\n\n[*4] Transfer Learning from Synthetic to Real-noise Denoising with Adaptive Instance Normalization, CVPR 2020", " First, I really appreciate your detailed reply to my concerns and questions. Your reply solved most of my concerns. 
\n\nThe most important, you differentiated your work from DAN.\n\nBut on the question of why emphasizing the LS, I still not that persuaded. I can understand that the LS represents better than the low-dimensional space, but to the best of my knowledge, most of the deep-learning based methods of low-level tasks including SR, low-light enhancement, image denoising, image restoration and so on usually use a convolutional layer to transform the input image to its corresponding feature. This seems a convention of the methods of multiple low-level tasks. Can you differentiate your LS from the previous works?\n\nAt last, based on your reply, I will raise my final rating to BA. But I still expecting your further discussion on the proposed LS.\n", " **Comment:**\nWe thank the reviewer for the valuable comments and approval for our work.\n\n>**The experimental evaluation section lacks complexity and time cost analysis and comparison.**\n\nThe complexity and time for $512\\times512$ images have been reported in Sec. ‘S8. Computational Complexity and Inference Time’ of the ‘Supplementary Material’. Moreover, for $256\\times256$ images, the FLOPs and inference times of ScaoedNet with 1 stage, 3 stages and 5 stages are 53G/0.02s, 160G/0.05s, and 268G/0.08s, respectively. For the second-best method DeamNet, it is 146G/0.05s. We can further reduce the complexity by using the multi-scale encoder-decoder based UNet architecture similar to DeamNet with four scales, where each encoder or decoder can consist of two FM$^2$ARBs. Then, the FLOPs in this case for 256$\\times$256 images will reduce to about 130G. We will add more analyses in the revised version.\n\n>**Section 3 is filled with an overwhelming amount of implementation detail that are better suited for supplementals. It would be better to put emphasis on theoretic/algorithmic reasoning & insights, and put implementation details in section 4 or a dedicated subsection in section 3.**\n\nThank you for your suggestion. We will pay more attention to the clarity of our paper in the revised version. Specifically, we will put the implementation details into Sec. 4 and the ‘Supplementary Material’, and add more theoretic/algorithmic reasoning and insights in Sec. 3, making it easier for readers to follow.\n\n>**Is the proposed model based framework applicable to other image restoration problems, e.g. deblurring, deraining, etc. ? This paper only discusses denoising and the value of the proposed methodology is therefore limited.**\n\nAs discussed in S11 ‘Limitations and Future Works’ of the ‘Supplementary Material’, the method can be potentially applied to other image restoration (IR) problems. We take image deblurring problem as an example. Image deblurring can be expressed as:\n\n$\\mathbf{y}=\\mathbf{x}*\\mathbf{k}+\\mathbf{n}$\n\nwhere $\\mathbf{y}$ is the degraded image, $\\mathbf{x}$ is the ground-truth image, $\\mathbf{k}$ is the blur kernel, $\\mathbf{n}$ is the noise, and $*$ denotes the convolutional operation. Then, Eq. 
(3) in the manuscript becomes:\n\n$\\begin{Bmatrix} \\hat{\\mathbf{z}},\\hat{\\mathbf{u}},\\hat{\\mathbf{k}} \\end{Bmatrix}=\\arg\\min\\_{\\mathbf{z},\\mathbf{u},\\mathbf{k}} \\mathcal{H}(\\mathbf{z},\\mathbf{u},\\mathbf{k},\\mathbf{y})+\\tau\\mathcal{G}(\\mathbf{z},\\mathbf{g})+\\eta\\_1\\psi(\\mathbf{u})+\\eta\\_2\\phi(\\mathbf{k}), s.t., \\hat{\\mathbf{x}}=D(\\hat{\\mathbf{z}})$\n\nwhere $\\tau, \\eta\\_1, \\eta\\_2$ are the weights for the regularizers $\\mathcal{G}(\\mathbf{z},\\mathbf{g})$ (guidance constraint, GC), $\\psi(\\mathbf{u})$ (noise map prior), and $\\phi(\\mathbf{k})$ (blur kernel prior). By using our self-correction (SC) alternative optimization, we can obtain\n\n$\\begin{cases}\n\\mathbf{g}^{(i)} = G(\\mathbf{u}^{(i)}, \\mathbf{k}^{(i)}),\\\\\\ \\mathbf{z}^{(i + 1)} = \\arg\\min_{\\mathbf{z}} \\mathcal{H}(\\mathbf{z}, \\mathbf{u}^{(i)},\\mathbf{k}^{(i)}, \\mathbf{y})+\\tau\\widetilde{\\mathcal{G}}(\\mathbf{z},\\mathbf{g}^{(i)}, \\mathbf{z}^{(i)}),\\\\\\ \\mathbf{u}^{(i+1)}=\\arg\\min_{\\mathbf{u}}\\mathcal{H}(\\mathbf{z}^{(i+1)}, \\mathbf{u},\\mathbf{k}^{(i)},\\mathbf{y})+\\eta\\_1\\widetilde{\\psi}(\\mathbf{u}, \\mathbf{u}^{(i)}),\\\\\\ \\mathbf{k}^{(i+1)}=\\arg\\min_{\\mathbf{k}}\\mathcal{H}(\\mathbf{z}^{(i+1)}, \\mathbf{u}^{(i+1)},\\mathbf{k},\\mathbf{y})+\\eta\\_2\\widetilde{\\phi}(\\mathbf{k}, \\mathbf{k}^{(i)})\n\\end{cases}$\n\nwhere $G(\\cdot)$ is the guidance information generator, $\\widetilde{\\mathcal{G}}(\\cdot)$ becomes the joint constraint of GC and SC for $\\mathbf{z}$, $\\widetilde{\\psi}(\\cdot)$ becomes the joint constraint of noise information and SC for $\\mathbf{u}$, and $\\widetilde{\\phi}(\\cdot)$ becomes the joint constraint of blur kernel and SC for $\\mathbf{k}$.\n\nBy comparing this equation with Eq. (5) in the manuscript, we can find the difference between the denoising and deblurring using our method is: in addition to the estimation of $\\mathbf{z}$, $\\mathbf{u}$, we have to estimate the blur kernel information $\\mathbf{k}$ and construct the new guidance information $\\mathbf{g}$ for deblurring. How to construct the $\\mathbf{k}$ estimation module and the guidance information generator for deblurring is very important. Other image restoration problems can be solved similarly. This is also one of our future works. We will add corresponding illustrations and analyses in the revised version.\n\n```\nWe hope that the responses alleviate the reviewer's concerns. We are happy to answer any additional questions the reviewer has.\n```", " **Comment:**\nWe thank the reviewer for the constructive comments.\n\n>**Authors should pay more attention to clarity, e.g., the abstract and some figures.**\n\nWe will improve the clarity. For example, we will make the abstract more appropriate, and carefully revise the layout and the visualization of figures. We will add “As shown in Fig. 1, the whole architecture of ScaoedNet achieves the best promising visual performance when compared to other denoising methods.” in Sec. 1. For Fig. 2, we will add the network names (such as $\\mathbf{u}_{\\text{ini}}$-Net, $\\mathbf{u}$-Net, and $\\mathbf{z}$-Net) into its sub-title.\n\n>**Clear theoretical reason and significance for LS are not indicated. In addition, the LS scheme is the essential for the performance, but there is no enough explicit explanation for its ablation study.**\n\nTheoretically, without using LS, Eq. 
(5) becomes:\n\n\\begin{cases}\n\\mathbf{g}^{(i)}=G(\\mathbf{u}^{(i)}),\\\\\\ \\mathbf{x}^{(i+1)}=\\arg\\min_{\\mathbf{x}}\\mathcal{H}(\\mathbf{x}, \\mathbf{u}^{(i)}, \\mathbf{y})+\\tau\\widetilde{\\mathcal{G}}(\\mathbf{x},\\mathbf{g}^{(i)}, \\mathbf{x}^{(i)}),\\\\\\ \\mathbf{u}^{(i+1)}=\\arg\\min_{\\mathbf{u}}\\mathcal{H}(\\mathbf{x}^{(i+1)}, \\mathbf{u},\\mathbf{y})+\\eta\\widetilde{\\psi}(\\mathbf{u}, \\mathbf{u}^{(i)}).\n\\end{cases}\n\nThe denoising is performed in the pixel domain to directly reconstruct $\\mathbf{x}\\in\\mathbb{R}^{n\\cdot c}$. It will lead to a $\\mathbf{z}$-Net with a very low input/output channel dimension $c$, and thus the information flow in the unfolding network will be reduced.\n\nAs shown in the original Eq. (5), by using LS, denoising is performed in LS to reconstruct $\\mathbf{z}\\in{ \\mathbb{R}^{n\\cdot m}}$, where $m$ is much larger than $c$. Thus, the $\\mathbf{z}$-Net has more sufficient representation ability than the one without LS, and can achieve higher performance. In addition, the channel number for information flow between $\\mathbf{z}$-Nets is larger compared to the version without LS, and thus the network information flow is enhanced.\n\nThese reasons make the LS scheme essential for the performance. In Sec. 4.5 ‘Ablation Study’, ‘w/o LS’ corresponds to the network implementation without LS, and the performance decreases significantly in this case, which verifies our analyses. In addition, by visualizing the features after $E$, we find it can well separate the image signal related features and noise related components, which also makes it easier to denoise complex real noise.\n\nThese analyses will be added in the revised version.\n\n>**The author has already chosen the SSIM as one evaluation, why repeatedly adopt SSIM loss in the total loss function?**\n\nCombining the $L_1$ or $L_2$ loss with the SSIM loss is very common in image restoration. E.g., with SSIM evaluation, [*1, *2] also use the SSIM loss. Specifically, the reconstruction loss in Eq. (6) contains two parts: 1) the $L_1$ loss between the output and the ground-truth image, to ensure the pixel-level similarity; 2) the SSIM loss $\\mathcal{L}_{\\text{S}}$, to pay attention to image structure preservation. We have analyzed the role of the SSIM loss in Sec. 4.5. The results show that it can slightly improve performance. But even without it, our method can still achieve good performance.\n\n[*1] Memory-efficient Hierarchical Neural Architecture Search for Image Denoising, CVPR 2020\n\n[*2] Invertible Denoising Network: A Light Solution for Real Noise Removal, CVPR 2021\n\n>**Can author provide more comparison with other noise estimation network to further verify the effectiveness of NFAE layer?**\n\nWe add the comparison with other noise estimation networks from AINDNet [*3] and PRIDNet [*4]:\n\n|Method|$L_1$ Distance$\\downarrow$|PSNR$\\uparrow$\n:-:|:-:|:-:\nwith NFAE|5.80|33.16\nAINDNet|6.11|32.92\nPRIDNet|6.19|33.01\n\nThe results show that the performance of our method with NFAE is still the best. We will add these results in the revised version.\n\n[*3] Transfer Learning from Synthetic to Real-Noise Denoising with Adaptive Instance Normalization, CVPR 2020\n\n[*4] Pyramid Real Image Denoising Network, VCIP 2019\n\n>**Authors can compare the proposed method with more advanced traditional methods, e.g., PNMM; DCDicL.**\n\nSince the code of PNMM is not provided, we will cite it in the revised version. For DCDicL, we have tested it for real noisy images by the provided model. 
Since the additive white Gaussian noise (AWGN) level is needed as an input, we empirically set it to 75 for the best performance: \n\n|Method|SIDD Validation|SIDD Benchmark|DnD Benchmark\n:-:|:-:|:-:|:-:\nDCDicL|33.76/0.8171|33.68/0.860|35.90/0.9150\nOurs|39.48/0.9186|39.44/0.956|40.12/0.9603\n\nDCDicL gets lower results because it is designed for AWGN. By combining our real noise map estimation network with DCDicL, and then retraining it for real noise, better results may be obtained. We will analyze these in the revised version.\n\n```\nWe hope that the responses alleviate the reviewer's concerns. We are happy to answer any additional questions the reviewer has.\n```", " **Comment:**\nWe thank the reviewer for the thoughtful and detailed comments.\n\n>**The description of the total loss is not clear enough.**\n\nAccording to Eq. (6), our total loss is as follows:\n\n$\\mathcal{L}(\\mathbf{\\Theta})=\\frac{1}{N}\\sum_{g=1}^{N}(\\Vert D(\\mathbf{z}\\_g^{(K)})-\\mathbf{x}\\_g\\Vert_{1}^1+\\gamma\\mathcal{L}\\_{\\text{S}}(D(\\mathbf{z}\\_g^{(K)}), \\mathbf{x}\\_g))+\\frac{\\eta}{N}\\sum_{i=1}^{K-1}\\sum_{g=1}^{N}\\Vert \\kappa\\_{i,g}\\otimes(\\hat{\\sigma}\\_{g}^{(i)}-\\sigma(\\mathbf{x}\\_g) )\\Vert_{1}^1$\n\nIt contains two main parts: the reconstruction loss and the noise map estimation loss.\n\nThe reconstruction loss, i.e., $\\mathcal{L}(\\mathbf{\\Theta})=\\frac{1}{N}\\sum_{g=1}^{N}(\\Vert D(\\mathbf{z}\\_g^{(K)})-\\mathbf{x}\\_g\\Vert_{1}^1+\\gamma\\mathcal{L}\\_{\\text{S}}(D(\\mathbf{z}\\_g^{(K)}), \\mathbf{x}\\_g))$,further contains two sub-parts: 1) $L_1$ loss between the output and the ground-truth image to ensure their similarity in pixel-level; 2) the structural similarity loss $\\mathcal{L}_{\\text{S}}$ to pay attention to image structures. In other words, in the reconstruction loss, both the pixel-level constraint and the structure-level constraint are considered simultaneously to ensure the quality of reconstruction.\n\nThe noise map estimation loss is $\\frac{\\eta}{N}\\sum_{i=1}^{K-1}\\sum_{g=1}^{N}\\Vert \\kappa\\_{i,g}\\otimes(\\hat{\\sigma}\\_{g}^{(i)}-\\sigma(\\mathbf{x}\\_g) )\\Vert_{1}^1$. That means the estimated noise maps of the first to $(k-1)$th degradation estimation (DE) networks are constrained to be close to the ground-truth one.\n\nThe weight of each stage is determined by $\\kappa\\_{i,g}=\\nu_i\\cdot\\alpha\\_{g}$, where $\\nu_i$ is the weight for the $i$th DE network. Considering that in multiple DE stages, the later DE network will produce more accurate estimation, and thus a larger weight should be assigned. Therefore, the geometric sequence with a common ratio $\\iota$ (greater than 1) and sum 1 can be used for $\\nu_ i$-s, i.e., $\\nu_{i}={(\\iota-1)}\\iota^ {i-1}/{(\\iota^{K-1}-1)}$. In addition, the parameter $\\alpha_{g}$ is the $g$th element of the indicator vector $\\alpha$ for the noise constraint. The reason for introducing $\\alpha$ is given in the following. For the commonly used training datasets like SIDD, RENOIR, etc., in real denoising, the specific noise maps $\\sigma(\\mathbf{x}\\_g)$-s are unknown, and thus cannot be used for training the DE network. In this case, the noise map estimation loss should be invalidated by setting its weight to 0; to update the parameters of the DE networks, we need to synthesize the dataset according to the real noise model established in Eq. (2) for training. Therefore, when the training data is synthetic with a known noise map, the weight for the noise map estimation loss should be 1. 
This is why we call $\\alpha_{g}$ the $g$th element of the indicator vector $\\alpha$ for the noise constraint.\n\nWe will add these details in the revised version.\n\n>**Mapping noise image to space that better distinguish noise has been proposed in NBNet. Please clarify the difference between you and NBNet on how to use the distinguishable space, and the superiority of your usage.**\n\n\\** **The difference between using the distinguishable space:**\n\n+ The space projection in NBNet is achieved by the subspace attention (SSA) module, which needs to construct the basis vector for obtaining the projection matrix. It is essentially an attention module. Our method directly using $E$ to project the input image to the distinguishable space without using attention mechanism;\n\n+ NBNet uses SSA to project the output of each decoder module separately in a UNet-based architecture, and thus multiple SSA modules are exploited. However, our method only needs to introduce the latent space (LS) at the beginning of the whole network to ensure the whole reconstruction process is carried out in the high-dimensional LS;\n\n+ SSA in NBNet requires two inputs, i.e., the low-level feature from skip-connection and the upsampled high-level feature. Specifically, the low-level feature from skip-connection is projected into the signal subspace guided by the upsampled high-level feature. However, our method only needs one single input, i.e., the input noisy image.\n\n\\** **Superiority of our usage:**\n\n+ To obtain the projection in NBNet, a lot of matrix operations are required. But the projection of our method can be more easily obtained by the $E$ module, avoiding matrix operations;\n\n+ NBNet requires two inputs, where one is used as the guidance for the other one. Thus, it cannot be directly used in our work since only one noisy input is available at the beginning of our network. However, since LS only needs a single input, it can directly perform projection on the noisy input image at the beginning of the network.\n\nWe will add these analyses in the revised version.\n\n```\nWe hope that the responses alleviate the reviewer's concerns. We are happy to answer any additional questions the reviewer has.\n```", " **Comment:**\nWe thank the reviewer for the valuable comments.\n\n>**The proposed method is seemingly an incremental method to DAN. I think that it somehow lacks of novelty.**\n\nAlthough both DAN and our method adopt alternative optimization (different tasks, i.e., super-resolution (SR) and denoising), **their theoretical novelties and the network novelties are much different.**\n\n\\** **Theoretical differences**\n\n+ DAN performs SR in low-dimensional pixel space, while ours performs denoising in high-dimensional LS; \n\n+ DAN only considers degradation estimation and reconstruction. In addition to degradation estimation ($\\mathbf{u}$-Net) and reconstruction ($\\mathbf{z}$-Net), we introduce guidance representation (GR, $G$-Module) to improve performance; \n\n+ Traditional alternating optimization is used in DAN. 
But we propose a novel self-correction (SC) alternating optimization method.\n\n\\** **Network differences**\n\n+ By using SC, our degradation estimation and reconstruction networks can better exploit the last estimates $\\mathbf{z}^{(i)}$ and $\\mathbf{u}^{(i)}$ for higher performance than DAN that is without SC; \n\n+ We introduce $G$-Module to guide denoising, which is not considered in DAN; \n\n+ We propose the FM$^2$A module with FSM and DEM, which is different from DAN; \n\n+ For $\\mathbf{u}$-Net, we propose a novel parameter-free NFAE layer, which is not used in DAN.\n\nWe will add the analyses in the ‘Supplementary Material’.\n\n>**I miss a general description of the total pipeline. I suggest the authors to modify the texts of Secs. 2, 3.**\n\nWe will add a general description in the revised version. E.g., we will add an introduction at the beginning of Sec. 2; In Sec. 3, we will modify the introduction of each sub-network. The algorithm flow chart corresponding to Sec. 2 has been reported in the ‘Supplementary Material’. Similarly, we will further add a flow chart corresponding to Sec. 3.\n\n>**I suggest the author choose clearer results to replace Fig. 4 and the upper row of Fig. 5.**\n\nWe will update these figures to make the visual effect more impressive.\n\n>**Whether these latent embeddings have special semantics? Why the authors emphasize it in the paper?**\n\nBecause of the importance of LS, we emphasize it in the paper. LS has the following advantages: 1) better representation ability than the low-dimensional space; 2) allows better information flow in the unfolding network.\n\nWithout LS in Eq. (5), denoising is performed in the pixel domain to reconstruct $\\mathbf{x}\\in\\mathbb{R}^{n\\cdot c}$. It will lead to the $\\mathbf{z}$-Net with a low input/output channel dimension $c$, reducing the information flow in the network. \n\nWith LS in Eq. (5), denoising is performed in LS to reconstruct $\\mathbf{z}\\in{\\mathbb{R}^{n\\cdot m}}$. Since $m>c$, $\\mathbf{z}$-Net has better representation ability. Moreover, the channel number between two consecutive $\\mathbf{z}$-Nets becomes larger, and thus the network information flow is enhanced. All of these will benefit the performance.\n\nWe can also observe from Sec. 4.5 that LS is important. Moreover, by visualizing the features after $E$, we can find that the embeddings are the hierarchical high-dimensional features of the noisy image, where the noise and image components can be easily decoupled, which can effectively promote the performance.\n\nThese reasons and the visual features will be added in the revised version.\n\n>**Have the authors tried self-attention mechanism? If not, why?**\n\nWe have considered using self-attention mechanism. However, because of its high parameter number and complexity, we finally adopted FSM for a balance between performance and complexity. Replacing all FSM modules with the self-attention modules in [*1], the parameter number will be 5.3M and the FLOPs will be 421G for a $256\\times256$ image, which is much larger than those of our current method. How to use self-attention more efficiently in ScaoedNet will be added as a future work..\n\n[*1] Uformer: A General U-Shaped Transformer for Image Restoration, CVPR 2022\n\n>**The formulation is seemingly weak while the performances are excellent. I totally agree that this is a great work, but have you thought about to submit your work to ECCV or CVPR? 
I think it would be more suitable.**\n\nECCV/CVPR indeed are suitable conferences, but we considered the following aspects: 1) our work includes both theoretical novelty and network novelty. The theoretical part is different from the existing methods, and has promising performance; 2) some good works with similar style to ours have been published in NeurIPS, e.g., DAN, [*2, *3]. Therefore, we submitted it to NeurIPS. We will improve the formulation in the revised version.\n\n[*2] Listening to Sounds of Silence for Speech Denoising,NIPS2020.\n\n[*3] Joint Sub-bands Learning with Clique Structures for Wavelet Domain Super-Resolution,NIPS2018.\n\n```\nWe hope that the responses alleviate the reviewer's concerns. We are happy to answer any additional questions that the reviewer has.\n```", " This paper proposes a real-world image denoising model with alternative optimization. The formulation is carefully discussed in order to ensure the validity of the proposed pipeline. This is an interesting approaching to solve the problem of image denoising. Experiments and ablation studies are sufficient to prove the superiority of the proposed method over other state-of-the-art methods and the efficiency of the designed modules. Strengths:\n\n1.\tThe method of alternative optimization is novel to image denoising. In order to adopt this method, the formulation of denoising is strictly discussed and the pipeline is carefully designed.\n\n2.\tThe definition of the problem and the corresponding solution are clear. Instead of regarding NNs as a ‘black box’, the authors try to analysis the building reasons of every block.\n\n3.\tThe experiments and ablation studies are sufficient. The performance of the proposed method is better than SotA methods clearly.\n\nWeakness:\n\n1.\tThe proposed method is seemingly an incremental method to DAN [1]. Although the pipeline is specifically designed for the task of image denoising, I think that it somehow lacks of novelty.\n\n2.\tI miss a general description of the total pipeline. Such a complex framework is a little bit hard to read. I suggest the authors to modify the texts of Sec. 2 and 3.\n\n3.\tThe subjective results shown in the paper is not that impressive. I suggest the author choose clearer results to replace Fig. 4 and the upper row of Fig. 5.\n\n[1] Z. Luo, Y. Huang, S. Li, L. Wang, and T. Tan. Unfolding the alternating optimization for blind super resolution. In Advances in Neural Information Processing Systems (NeurIPS), volume 33, pages 5632–5643, 2020.\n 1. The embeddings in the proposed Latent Space (LS) are seemingly just convolutional features of input images. Whether these latent embeddings have special semantics? If not, why the authors emphasize it in the paper (even in the title)?\n\n2.\tI notice that the proposed method uses improved channel attention (CIG-CA) and spatial attention (CIG-SA). Have the authors tried self-attention mechanism? If not, why?\n\n3.\tThe formulation of the proposed method is seemingly weak while the performances are excellent. I totally agree that this is a great work, but I’d like to ask the authors that have you think about to submit your work to ECCV or CVPR? I think it would be more suitable for those conferences instead of NeurIPS.\n The limitations are clearly addressed in supplementary materials. This work does not have any potential negative societal impact.", " This paper proposed an enhanced latent space blind model based deep unfolding network, namely ScaoedNet, for complex real image denoising. 
Experiments demonstrate the superiority of ScaoedNet over many SOTA methods. Strength:\n \n1. An enhanced model-based denoising cost function is proposed to implicitly optimize the ScaoedNet, which addresses the challenging task of manually designing the optimal operators and making the optimal process more interpretable. \n\n2. The effective NFAE layer leads to better results without increasing extra parameters.\n\nWeaknesses:\n\n1. The description of the total loss is not clear enough. \n\n2. Map noise image to space that better distinguish noise has been proposed in [1] and therefore cannot be made as a prominent contribution.\n\n[1] 1. Cheng, Y. Wang, H. Huang, D. Liu, H. Fan, and S. Liu. Nbnet: Noise basis learning for 356 images denoising with subspace projection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4896–4906, Jun. 2021.\n Please clarify the difference between you and [1] on how to use the distinguishable space (named latent space or subspace), and the superiority of your usage. See in weakness.", " In this paper, the authors proposed a method for real image denoising. The authors designed the enhanced model-based denoising cost function by introducing latent space, noise information and guidance constraint. And then implement the proposed self-correction alternative optimization algorithm to minimize cost function via deep network. Besides, for higher performance, some optimized modules of subnetwork are proposed, including noise feature adaptive enhancement (NFAE) layer and feature multi-modulation attention residual block (FM2ARB). Motivated by the advances in deep networks and relying on the rich body of the model based methods, authors propose a novel enhanced latent space (LS) blind model based deep unfolding network for real image denoising. Extensive experiments verify that this method achieves excellent performance on real image denoising. \nHowever, authors should pay more attention to clarity of the paper. For example, the abstract of this paper is too mess to emphasis the most important novelty. Author needs to write it in brief and appropriately. \nBesides, the layout and the visualization of some figures also need to be attached. \ne.g., the Figure 1 is not mentioned in the text but shown. In Fig.2, the name of subnetworks cannot be corresponded to the figure note. Readers still need to find each subnetwork’s name through article. 1. In ‘Analysis and Enhancement’ part, the author represents that LS encoding function can obtain high-dimensional image embeddings instead of limited squared error in low-dimensional image space. However, no clear theoretical reason and significance are indicated.\n2. As mentioned in the article, LS scheme is the essential for the performance of this work. However, there is still no enough explicit explanation for its ablation study.\n3. The author has already chosen the SSIM as one evaluation, why repeatedly adopt SSIM loss in the total loss function? \n4. Can author provide more comparison with other noise estimation network to further verify the effectiveness of NFAE layer?\n5. Authors also can compare the proposed method with some more advanced traditional methods, e.g.: Exemplar-Based Denoising: A Unified Low-Rank Recovery Framework, IEEE Transactions on Circuits and Systems for Video Technology, 2020; Deep Convolutional Dictionary Learning for Image Denoising,CVPR, 2021. 
In this paper, the method that authors proposed makes full use of the advantages of classical denoising methods and deep network, which no longer sticks to AWGN and contributes to real image denoising to a certain extent.", " This paper proposes an image denoising network based on latent space self-correction alternative optimization. A deep unfolding network implements alternative optimization, and state of the art denoising performance is achieved on multiple benchmark datasets. Strengths: \n- the proposed method is backed with classic model based image restoration theory that provides strong regularization on the network design, which allows the proposed method to achieve state of the art performance with a modest model size.\n- implementation detail of the proposed network is well documented for reproduction\n- extensive ablation study is conducted to analysis each individual component of the proposed network architecture\n\nWeakness:\n- The experimental evaluation section lacks complexity and time cost analysis and comparison\n- section 3 is filled with an overwhelming amount of implementation detail that are better suited for supplementals. It would be better to put emphasis on theoretic/algorithmic reasoning & insights, and put implementation details in section 4 or a dedicated subsection in section 3. Right now section 3 provides very limited insight into *how* the network implements and enforces those ideas. 1. Would raise score if inference complexity/time is on par with other state of the art methods.\n2. Is the proposed model based framework applicable to other image restoration problems, e.g. deblurring, deraining, etc. ? This paper only discusses denoising and the value of the proposed methodology is therefore limited. N/A" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "j7lHtgmyz51", "NlewbzwMfY-", "vefsm2Hwbs7T", "xUbYbiucJnI", "WBCHVg9lwU", "MEAoqydzf9m", "cs5Qu9yXAtE", "aVi0roC7IQG", "nips_2022_Qt4rKNYzcO", "nips_2022_Qt4rKNYzcO", "nips_2022_Qt4rKNYzcO", "nips_2022_Qt4rKNYzcO" ]
nips_2022_2nYz4WZAne4
Generative Evolutionary Strategy For Black-Box Optimizations
Many scientific and technological problems are related to optimization. Among them, black-box optimization in high-dimensional space is particularly challenging. Recent neural network-based black-box optimization studies have shown noteworthy achievements. However, their capability in high-dimensional search spaces is still limited. This study proposes a black-box optimization method based on an evolution strategy and a generative neural network model. We designed the algorithm so that the evolution strategy and the generative neural network model work cooperatively with each other. This hybrid model enables reliable training of surrogate networks; it optimizes multi-objective, high-dimensional, and stochastic black-box functions. In our experiments, our method outperforms baseline optimization methods, including evolution strategies and Bayesian optimization.
Reject
While the topic of the paper and the reported experimental results appeared to be of interest to the reviewing team, a number of limitations were put to the fore by the reviewers, who graded the paper with scores between 2 and 5 and often emphasized various issues such as a lacunary literature review (see in particular the comments of Reviewer wE4h), insufficient comparison with state-of-the-art approaches, in particular on realistic test cases (bPMJ), as well as a lack of clarity (see for instance Reviewer uzBL's review: "The paper is poorly structured and written to the point where it is quite difficult to understand"). For all these reasons I recommend rejection.
val
[ "URrcJOKkCt", "9Qp49S7q6ke", "eulxqC6A3sm", "Y6IbK70l6m", "93Hq1E1ihes", "R9av0CHDtFk", "VmDvzKmmM7X", "3y_j2y_hJS", "8Wl0CqN2cQz", "_Ys0WKRx4J3", "3ZNCrCde4Pe", "ZoC5zoOK7e", "g8BswzUN727", "E2nq74rvYGq", "78PKyQx30TJK", "iAhth3NqgT1", "GL55imJ7D-A", "YpCyGK0O7EF", "RItkzVUquQ_", "QKlduAyTPTW", "mQZJRhuqww8", "bUbqRPGHKq" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to your valuable review, we were able to improve the revision a lot.\n\nIn the first submission, there were many places where the explanation was insufficient.\nIn particular, it seems that the purpose of each experiment was not sufficiently explained. (Cartpole-V1, LeNet-5)\nIn the revision, we tried to write a more detailed description, following the opinions of the reviewers.\nAdditional experiments have also been added.\n\nRegarding sentence structures and statements, we are still working on making the paper easier to read. However, we agree that we need additional proofreading.\n\nThank you.", " Thanks for the responses and updates. Most of my specific concerns were addressed. I believe the method is actually quite interesting and achieves good results; however, the writing is still extremely poor. While the authors made some modifications during the rebuttal period, there are still a very large number of imprecise statements, non-academic language and grammatical errors in the manuscript. I would highly recommend giving the paper a *thorough proofreading*. As-is, the writing is still so difficult to parse that it significantly affects my understanding of the paper. \n\nI believe many of the other reviewers' concerns can be addressed with better writing. This includes things like clarity on experimental details (this is particularly important), placing / framing / motivating the work more precisely with respect to related work, and improved grammar and general writing (which would make it *much* easier to read).\n\nI will update my score to reflect the fact that the underlying method seems interesting enough. \n\n======\n\nA brief note on the RL papers:\n\nAlthough the papers evaluate simple genetic/evolution algorithms in reinforcement learning, they are still treating reinforcement learning as a black-box problem, where they output policy parameters. Their methods should still be able to apply to these simpler settings just as well. \n\n", " I think it's also a good idea to use BO or PSO instead of ES.\nHowever, these cases would be very different from the original idea of GEO, and we would need to devise a new method for the combination strategy of the generative model and BO (or PSO).\n\nFor example, the core idea of BO is an exploit & explore strategy through likelihood scores.\n(This method can be very efficient, but it has the disadvantage of having non-linear complexity.)\nWhen combining a neural network with BO, we can think of a neural network model that can estimate the maximum likelihood score.\nThis can be a good approach, but it's too different from the core idea of GEO.\n\nMeanwhile, the core idea of PSO is a combination of single-particle velocity and group velocity.\nThe velocity is explicitly defined by the score and the variable $x$, so I'm not sure how to combine them with a neural network.\nOne method might be to alternate between PSO and surrogate-model based optimization (SO).\n\n$PSO$ $\\rightarrow$ $SO$ $\\rightarrow$ $PSO$ $\\rightarrow$ $SO$ $\\rightarrow$ $\\cdots$\n\nHowever, this is not a combination method; it is a simple alternating scheme.\n\nGEO is not a simple alternating method of\n\n$ES$ $\\rightarrow$ $SO$ $\\rightarrow$ $ES$ $\\rightarrow$ $SO$ $\\rightarrow$ $\\cdots$\n\nThis is because GEO's evolution and generative models are parts that cannot be used in isolation. (The evolutionary algorithm uses generators ($G$) as elements, not variables ($x$). 
Meanwhile generators diverge without evolution)\n\nSo right now, I haven't figured out how to apply BO or PSO to SO, but we can try them in our next study.\n\nThanks for the good idea", " Thank you for your answer.\n\nThis problem happens when the cost of observation is higher than the cost of operation.\nAnd it is strongly related to the real-world problem I mentioned earlier.\n\nA \"Verilog simulator\" is an electronic device simulator, and it has a time-sequential I/O structure.\n(There are many different types of simulators and device testers, but for the sake of simplicity, we will call them \"Verilog\".)\n\nIn this simulator, the I/O structure is defined as a time sequential structure as follows.\n\nInput : $[Action_1, Action_2, ... ]$\nOutput: $[(State_1, Reward_1), (State_2, Reward_2), ...]$\n\nAnd training of RL is done as follows\n\n$Action_1$ $\\rightarrow$ $Observe(State_1, Reward_1)$ $\\rightarrow$ $training$ $\\rightarrow$ $Action_2$ $\\rightarrow$ $Observe(State_2, Reward_2)$ $\\rightarrow$ $training$ $\\cdots$\n\nSo some people think that this problem can be solved with RL.\n\nHowever, in the Verilog, observing the time-sequential data requires much more cost than observing the sum of entire result at once.\nThis is because, in the Verilog simulation, the observation cost of each time sequence $(State_t, Reward_t)$ is almost equal to the observation cost of the $score = \\sum (rewards)$ at the final time.\n\nIn other words, when the time sequence length is N, the cost of $[(State_1, Reward_1), (State_2, Reward_2), ...]$ observation is almost N times of the observation cost of $score = \\sum (Rewards)$.\nThis means that we cannot practically observe time-sequential data of a Verilog simulator.\nIn addition, $score = \\sum (Rewards)$ is much more valuable data than each $(State_t, Reward_t)$.\n\nOur Cartpole experiment is a toy model for this situation.\nWe assume that the only observable information is $score = \\sum (Rewards)$ rather than $[(State_1, Reward_1), (State_2, Reward_2), ...]$, as in the Verilog situation.\n\nWe are well aware that Cartpole-V1 does not have the observation cost problem, but we use Cartpole-V1 as a toy model to describe what happens in the real-world problems.\n\nPS. When we talk about \"cost\", it is not only a time cost, but also an expense in dollars.\n\nImagine that you need to optimize a mass production process in a factory.\nObserving the intermediate process incur huge costs (because the manufacturing process must be stopped for observations). This is the big difference between a simple simulator and a real problem.\n\nIn short : The more data you observe, the higher the expense is (in dollar). In this case, it is very important to observe only the final score which has the valuable information.\n\n---\nWe updated additional explanation of the observation cost problem in the supplement.", " Many RL are tested on video games that are regarded as black-box simulators, i.e., no additional information of the transition model of the video games is used to train RL. So I do not think \"RL cannot be used to solve this problem\". At least, a non-customized RL method can be applied to show the baseline performance of this type of methods.\n\n\"Let’s consider simulation time is 1h, and steps are 10,000. Then, the total time will be 10,000h just for one iteration.\" I do not understand this statement. For RL, the 10000 sequential actions will be performed in one simulation, not 10000 simulations. 
And gradient descent may quickly converge to an optimum and thus reduce the iterations. And this statement seems like a drawback of ES that ES usually needs large number of iterations.\n\n", " The authors responded that \"The role of the evolution strategy is to ensure that the generator and its corresponding X are trapped near the Pareto front\". This is not intuitive to me. What if we replace ES with other search methods? Because the title of this paper is called \"generative evolution strategy...\", it is expected to see clearly why ES here is a must, not an alternative.\n\nIn this regard, the Ablation study can be the GEO with ES replaced by other search methods like Bayesian Optimization, or PSO.\n", " We recognize that there may be confusion because we provided too little information in the first submission.\nIn the revised version (and supplement), we specified the definition of Pareto efficiency, the description of ranks and sorting.\n\nThank you again for your help.\n", " Based on my reading of the LEMONADE paper, they are using the term correctly. When they say \"Pareto Front\" they are referring to the strict definition of \"Pareto Front\" and are not referring to any type of sorting.\n\nThey are not sorting/ranking, but directly filtering everything that's not on the pareto front. Hence, they do not mention non-dominated sorting, nor do they use the terminology regarding \"rank n\" that the authors bring up. They precisely define exactly what a pareto front is, and do not describe sorting or ranking because they do not perform sorting or ranking. \n\n>LEMONADE maintains a population P of parent networks, which we choose to comprise all non-dominated networks\n\nIn fact, they explicitly say that they don't consider any points that are dominated\n\n>One could also include some dominated architectures in the population to increase diversity, but we do not consider this in this work\n\nIndeed Long et al. mention the concept of ranks of Pareto fronts. However, note that:\n\n1. They define precisely what they mean when they write that. \n\n2. In the original version of this paper, the authors did not mention \"ranks\" of pareto fronts\n\nFurthermore, the fact that, as the authors mention, Tian et al. us the term differently, means that it is quite important to specify exactly what is meant by \"sorted and selected by Pareto Efficiency\".", " Thank you for your comments\n\nYour comment is that we should have been more detailed about the Pareto efficiency sorting method, for a better understanding of readers.\n\nOf course our intention was the non-dominated sorting.\nSince the concept of Pareto-efficiency is widely used even in autoML, we thought that it would give a simple and clear explanation of our intention to readers.\n\nIn general, (rank n of) Pareto-front, Pareto-efficiency, Pareto-frontier … are used as the same term of (rank n of) non-dominated sorting.\n\nA more precise explanation is that \"Pareto-front\" means the result, and \"non-dominated sorting\" means the method to get Pareto-fronts. The difference between non-dominated sorting methods is the computational complexity. However, the result of methods should be exactly same : Pareto-fronts.\n\nIn short, whichever method is used, the Pareto-front result must be the same, so we skipped the description of the method.\n\n---\nLEMONADE [1] is an early study of autoML. 
\nIn the pseudo code, they simply describe calculation of Pareto-front\n\nP <- ParetoFront(P U N)\n\nwith a simple definition of Pareto-front.\n(And they did not describe what method they used.)\n\n“Non-dominated sorting methods for multi-objective optimization” [2] is a method for a fast calculation of non-dominated sorting. In this paper, they also simply use the term Pareto front as the result of non-dominated sorting.\n\nHowever, in the “Effectiveness and efficiency of non-dominated sorting for evolutionary multi-and many-objective optimization” [3], they often use the term Pareto-front as a ground state of the target function. It is because the goal of this study is the performance comparison of W/ non-dominated soring and W/O non-dominated sorting. (The conclusion is that non-dominated sorting is very important.)\n\n---\nWe agree that it would be much better to explain details of method for a better understanding.\nWe will add more explanations about pareto efficiency and sorting techniques in the revision (and supplement) as soon as we can.\n\nThank you again for comments.\n\n[1] Elsken, T., Metzen, J. H., & Hutter, F. (2018). Efficient multi-objective neural architecture search via lamarckian evolution. arXiv preprint arXiv:1804.09081.\n\n[2] Long, Qiang, Xue Wu, and Changzhi Wu. \"Non-dominated sorting methods for multi-objective optimization: review and numerical comparison.\" Journal of Industrial & Management Optimization 17.2 (2021): 1001.\n\n[3] Tian, Ye, et al. \"Effectiveness and efficiency of non-dominated sorting for evolutionary multi-and many-objective optimization.\" Complex & Intelligent Systems 3.4 (2017): 247-263.\n", " ## On Pareto Efficiency\n\nI believe I phrased my question regarding Pareto Efficiency poorly. I'm familiar with Pareto Efficiency, but not with the author's usage of the term in the paper. In the code and examples given (and in its most well-known form) Pareto Efficiency/the Pareto Front is a binary notion. A point either is or isn't Pareto Efficient (on the Pareto Front). However, when the authors mention \"sorting by Pareto Efficiency\" and use it in the pseudocode, they are not referring to a binary notion. \n\nIn the response here, the authors mention a specific method of sorting the points by iteratively calculating and removing the Pareto front. My understanding is that this is a form of Non-Dominated Sorting. Neither the term \"nondominated sorting\", nor even a description of this sorting method, are mentioned anywhere in the paper. This seems to be a pretty important implementation detail for describing and re-implementing the algorithm. There are many entire papers dedicated to different ways of doing this sorting, including these fairly recent ones [1, 2]. \n\nIt seems to me that the author's usage of the term \"Pareto Efficiency\" is incorrect. If someone were to read the paper and pseudocode and implement this, they would likely take the term \"ParetoEfficiency\" in the pseudocode and just apply a binary fitness of whether the point is on the Pareto Front (since that is what Pareto Efficiency is) rather than the sorting method the authors mentioned in this response. Furthermore, because the authors do not mention any details of this sorting in the paper or supplementary material (or even use the term \"nondominated sorting\"), it would not be possible to properly reproduce the algorithm and results with confidence.\n\n[1] Long, Qiang, Xue Wu, and Changzhi Wu. 
\"Non-dominated sorting methods for multi-objective optimization: review and numerical comparison.\" Journal of Industrial & Management Optimization 17.2 (2021): 1001.\n\n[2] Tian, Ye, et al. \"Effectiveness and efficiency of non-dominated sorting for evolutionary multi-and many-objective optimization.\" Complex & Intelligent Systems 3.4 (2017): 247-263.\n", " There were many changes in the revision.\nIn particular, the algorithm description has been improved.\nAdditional explanations were added according to the comments of the reviewer, and unnecessary explanations were removed.\n\nThe background of the Cartpole-V1 experiment and the purpose of the LeNet-5 experiment were further explained.\n\nData has also been added.\n\nthank you", " About real-world problems\n\nIn the Q1 and Q5, I explained what \"real-world problem\" is. \n\nUnlike academia, companies are reluctant to disclose their technology. It was hard to write specific stories about real-world problems in the paper. This is why we only used the test functions, widely known.\nI believe you will understand that GEO was developed to solve empirical problems.\n\n---\nThere were many changes in the revision.\nIn particular, the algorithm description has been improved.\nAdditional explanations were added according to the comments of the reviewer, and unnecessary explanations were removed.\n\nThe background of the Cartpole-V1 experiment and the purpose of the LeNet-5 experiment were further explained.\n\nData has also been added.\n\nthank you", " The purpose of the LeNet-5 experiments.\n\nThis experiment is not for baseline comparisons.\n\nIn the related work L-GSO, authors mentioned that the performance of L-GSO is better in sub-manifolds.\n\nBecause GEO's generator-critic work flow is similar to L-GSO, we guessed that the performance of GEO would be better in a low-dimensional manifold.\nAlso, we thought that image generating problem is the simplest way to show manifold structures. Therefore, we expected that GEO will generates blurred image of numbers (because a smooth image is a low-dimensional manifold structure).\n\nHowever, results seem to be noise shape, not manifolds. There must be a difference between GEO and L-GSO, but we do not clearly find what makes the difference.\n\n---\nAbout RL papers\n\nI'm reading the paper you gave me, and it's interesting. And I see that the concept of evolution is partially related.\nBut as far as I understand, it is about reinforcement learning.\nRL and black-box optimization have completely different problem definitions. (ex. 
I/O structure of environments)\nTherefore, it is inappropriate to compare RL and black box optimizations.\n\nTo be clear, black box optimization and RL have completely different problem structures and should be clearly distinguished.\n\n---\nFYI, why did we used Cartpole-v1 toy model?\n\nLet me explain the reason for the Cartpole-v1 experiment.\n\nThe real-world problem where GEO is used is the evaluation of electronic device performance.\nThe target function is a Verilog device simulation.\nSince this simulation has a time-sequential I/O structure, it looks like it could be optimized with RL.\n\nThe problem, however, is that the cost of the simulation is too high to observe all of the time-sequential data.\nIn practice, it is only possible to observe the final score, which is the sum of the rewards.\nThen, the I/O structure is not time-sequential structure anymore.\n\nTherefore, this problem should be solved by black box optimization techniques, not RL.\nThe Cartpole-v1 experiment is a toy model for this situation.\n\n---\nThere were many changes in the revision.\nIn particular, the algorithm description has been improved.\nAdditional explanations were added according to the comments of the reviewer, and unnecessary explanations were removed.\n\nThe background of the Cartpole-V1 experiment and the purpose of the LeNet-5 experiment were further explained.\n\nData has also been added.\n\nthank you", " Ps.\nAlthough we explained the reason why we did not test algorithms you suggested, we agree that it is also important to show data as much as we can. \nThe revision includes more data and related studies based on your comments.\n\nThank you\n\n---\n1. Additional answer. About the purpose of the experiments.\n\nIt seems to be there is a little misunderstanding of the purpose of this experiment.\nThe experiments in the paper are NOT intended to compare the performance of algorithms and determine which algorithm is SOTA.\nLet me explain the reason.\n\nLet’s consider we have a set of Bayesian optimization (BO), Reinforcement learning (RL), evolution strategy (ES).\n\nThe question is follows.\n\n“Is it possible to compare performances of BO and ES?”\n\n“Is it possible to compare performances of ES and RL?”\n\n“Is it possible to compare performances of RL and BO?”\n\nThe answer will be “No”\nIt is because the goal of BO and ES seems similar, but actually different. \n\nBO is designed to optimize black-box for the case of high cost functions. Meanwhile, ES is designed to optimize black-box for the case of low cost functions.\n\nIf the number of function calls N < 100, BO will be better in most cases. On the contrary, if N > 10,000, ES will be better in most cases. \n\nTherefore, It is not strange idea if we want to see how the performance of BO and ES changes in each dimension. However, if we try to compare performance between BO and ES, it will be very strange idea. Metaphorically speaking, this is equivalent to a performance comparison between a language model and an image model.\n\nThe algorithms of your comments are ES, and they have the goal which is :\n\n“Searching global optimum of black-box in low-dimensions”.\n\nAnd it is well known that ES optimization becomes very difficult when the dimension exceeds 100. 
(I think that you already know about this, because in the materials you gave, most of the test dimensions are less than 100.)\n\nIn the paper (and supplement), we show that GEO has much worse performance in low dimensions.\nThis is because we designed GEO to have the goal:\n\n“Searching for a local optimum in extremely high dimensions (~10,000)” (not low dimensions, and not the global optimum)\n\nThis is totally different from the goal of classical ES algorithms. Therefore, we thought that a comparison between GEO and ES could be inappropriate, just like that of BO and ES.\n\nAlso, with the experiment, we wanted to show:\n\n“What goals can GEO achieve? How are they different?”\n\nGEO : better in high dimensions, worse in low dimensions\n\nES : better in low dimensions, worse in high dimensions\n\nIn summary, since the goals of GEO and [CMAES, NSGA2, NSGA3, MOEA/D, ...] are different, I see an experiment that compares their performance as unsuitable, just like that of BO and ES, because such experiments may give the false message that GEO is a better algorithm.\n\n---\n2. About experiments\n\nFor the above reasons, in the first submission, we thought that it was only important to show the trend of ES and GEO according to the dimensions.\n\nSince it is known that ES rarely works at high dimensions, we thought it was sufficient to show only NSGA2 (the best result found in the experiment).\n\nHowever, these attempts seem to have caused misunderstandings.\nSo, we have detailed the reasons in response to your comments, and we have incorporated them into the revision as well.\n\n---\n3. About the SOTA of ES.\n\nThe performance of an optimization algorithm is (very) strongly correlated with the initial states, boundary conditions, target functions, and hyper-parameters of the algorithm.\n\nIn test function optimizations, researchers have already found the best combinations that produce SOTA results. However, in real-world problems it is impossible to find the best combination for an algorithm, because real-world problems are very expensive in general.\nFor this reason, there is a risk that the (so-called) SOTA of optimization studies can be biased toward the test functions.\n\nI think this is why the performance of black-box algorithms is not always as we expect.\nIn my opinion, we have to be careful in saying what SOTA is.\n\n---\n4. About the summary\n\nIn the paper, we suggested that the input seed z can be a random variable or a constant. In your short summary, you wrote that we used a latent z, but we did not say that z is a latent vector.\n\nIt does not make sense for z to be both a latent variable and a constant. In the paper, we explained that most GAN algorithms feed a random latent variable, but some GAN algorithms do not. Likewise, we also do not see z as a latent vector; we see it as a simple input feed.\n\n---\n5. Typo correction: I found [while n=N], but it should be [while n<N] or [for n in range(N)]. I’m really sorry for confusing you.\n", " We really appreciate your valuable review. 
The revision has taken your comments into consideration.\n\n---\nWe would like to correct a minor misunderstanding in your summary, and explain why we did not test the algorithms you suggested.\n\n---\nA.1 About high-dimensional optimization algorithms [SEP-CMA, VD-CMA, LM-CMA].\n\nI understand they are developed for high-dimensional problems.\nHowever, they are designed to optimize (high-dimensional) convex functions, not (high-dimensional) non-convex functions.\n\nThe optimization of convex and non-convex functions is totally different, because convex functions can easily be optimized by gradient descent methods. \nMost ML algorithms use modern gradient descent methods (such as the Adam optimizer). They can optimize target functions in million, billion, and trillion dimensions. However, we do not call them high-dimensional black-box optimizers.\n\nWe should distinguish between convex problems and non-convex problems.\n\n---\nA.2 About surrogate-assisted models\n\nIn Section 1 and Section 4, we mentioned that it is important to ensure O(N) (linear) computational complexity in high-dimensional optimization problems. \n\nSurrogate-assisted models might be good approaches for the next generation of optimization strategies, but their computational time is not linear as far as I know. To make sure of this, we hastily reran the experiments recently.\nAs a result, we reconfirmed that the models do not have linear complexity and incur enormous costs at high dimensions or with large numbers of function calls.\nIn the worst case, the time cost is so high that we cannot optimize at all even in low dimensions (d ~ 100).\n\nIn short, computational complexity should be considered first, because in extremely high dimensions such as 10,000, an enormous number of function calls is required.\nThis is a big difference between classical ES algorithms and surrogate-assisted models.\n\n---\nA.3. NSGA2, CMAES and single-objective optimization.\n\nYou suggested that the NSGA2 algorithm should not be used for 1-objective problems. I understand your comment, as CMAES is known to outperform NSGA2 (in general). However, this is not always true.\n\n(In our last submission, we showed CMAES results in the supplement.\nIn this revision, we plotted the CMAES results in Figure 3 and compared them with NSGA2.)\n\nAs you can see in the single-objective optimization experiment (Figure 3 and supplement), CMAES is not always better than NSGA2.\nIn the experiment (Styblinski-Tang), NSGA2 is the better one.\nIf we had used the Ackley function as a black box, CMAES would have performed better.\n\nThe performance of an optimization algorithm is (very) strongly correlated with the initial states, boundary conditions, target functions, and hyper-parameters of the algorithm.\nFor this reason, it is impossible to guarantee that a particular algorithm is better than others.\nAlso, in practical problems (not just test functions), our experience is that NSGA2 is also a good optimization algorithm for 1-objective problems.\n\nThis is the reason why we used NSGA2 instead of CMAES.\n\n---\nA.4 NSGA3, MOEA/D, and others.\n\nFrom the early stage of this study, we already tested [NSGA3, MOEA/D, …], but we concluded that there is no need to show all results in the main paper. 
Let me explain why.\n\nNSGA3 and MOEA/D might be better than NSGA2 in low-dimension because NSGA3 and MOEA/D have reference direction assistances which NSGA2 does not have.\n\nHowever, in the extremely high-dimensional space, the performance between NSGA3 and NSGA2 becomes indistinguishable.\nIt is because they are easily trapped in local-optimum (near the initial state) when the dimension increases to [d >> 1,000]\n\nIn the revision figure 4 and 5, you can find that NSGA3 and MOEA/D often show worse performance than NSGA2.\nThis is why we plotted NSGA2 instead NSGA3 and MOEA/D in the first submission.\n\nIt seems clear that NSGA2 is better in this experiment, but we also added NSGA3 and MOEA/D for comparison.\n\n(I also agree that more data is better somehow.)\n\n---\nA.5. 2-objective optimization and CMAES\n\nHow to optimize 2-object with CMAES : A simple alternating method:\n\nOptimize f1 -> optimize f2 -> optimize f1 -> optimize f2 -> ...\n\nIt requires additional hyper param.\n\nBut, we agree that NSGA2, NSGA3 and MOEA/D could be better than CMAES in general (especially for low-dimensions).\nSo, we just corrected it in Figure 4 and 5.", " I really appreciate your valuable questions and advices\n\nTypo correction:\nI found [while n=N], but it is [while n<N] or [for n in range(N)].\nI’m really sorry for confusing you\n\n---\nQ1.\n\nA1.1. Where random z and constant z used?\n\nFigure 4 is non-stochastic black-box experiments. We used (pseudo random) constant seed z in it.\n\nFigure 5 is stochastic black-box experiments. We used random seed z in it.\n\nA1.2. About pseudo random “constant“ :\n\nThe seed z is initialized with random, after the initialization, we regard z as a constant.\n\nIn the most of GAN algorithms, the seed z is random variable. (StyleGAN uses constant z as its seed of CNN)\nIt is because the generator of GAN must generate various images with a single generator. They regard seed z as a latent space, and create multi images using the latent space.\n\nHowever, GEO has a pool of generators. There is no need for generators to have latent space. Because the parameter of generator always changed by critic, and the mutated generator always creates a new output X. \n\nAlso, the goal of optimization is not generating various outputs. The goal is finding Pareto-fronts. Therefore, we don’t have to regard z as latent space, and the seed doesn’t have to be random.\n\nA1.3. The reason why we used random & constant seeds.\n\nWe can always use random seed z. Instead, we have to refresh the pool periodically. The pool refresh process makes computational time almost doubled. Age-evolution is another option but, it makes the performance degraded. Therefore, we used constant z when we can. However, in the stochastic black-box, we have to refresh the pool regardless of z. So we used random z in Figure 5.\n\nA1.4. How would optimizing G be any different than optimizing \"x\" directly?\n\nIn the paper, we show that the depth of the generator is important.\nWe also proved it by the experiment \"1-layer generator test\".\nIn the related works section, we introduced GLOnet to emphasize the importance of the deep generator.\nAlso, we explained the reason.\nWe guess that the good result comes from the manifold searching ability of generative models.\n\n---\nQ2.\n\nA2. This question is the same as Q1.\n\nIf we use random z : stochastic generator (Figure 5)\n\nIf we use constant z : non-stochastic generator (Figure 4)\n\nAs I mentioned in A1.2, we don’t need latent space. 
It is because the goal of optimization and GAN are different. (see A1.2)\nThe constant z method will have trouble only when we need “inference”. But I don’t find any reason to have inference.\n\n---\nQ3. How to measure Pareto efficiency\n\nA3. \n\nA simple example of Pareto-efficiency (a case of maximization of a 2-objective function).\n\nLet ($f_i$, $g_i$) ∈ S\n\nIf there is no ($f_j$, $g_j$) in S that satisfies $f_j$ > $f_i$ or $g_j$ > $g_i$\n\nThen, ($f_i$, $g_i$) ∈ P, where P is the Pareto front.\n\nAnd how to sort : \n\nDef : P = Pareto(S), where S is an original set, P is Pareto front subset of S.\n\nP1 = Pareto(S)\n\nP2 = Pareto(S-P1)\n\nP3 = Pareto(S-P1-P2)\n\n…\n\nSort : subsets in order [P1, P2, P3, …]\n\nFYI.\n\nPareto efficiency is very famous concept, so you can easily find codes in Google.\n\nThe python code that I recommend you : https://stackoverflow.com/questions/32791911/fast-calculation-of-pareto-front-in-python\n\nIt will be easy to understand.\n\nI also recommend to read very simple explanations:\n\nhttps://www.statisticshowto.com/pareto-efficiency/\n\nhttps://www.economicshelp.org/blog/glossary/pareto-efficiency/\n\n---\nQ4.A. \n\n\"Pareto-efficiency\" is the method. The Pareto-efficiency is a widely used method for evaluating and optimizing multi-objective targets.\nIn the paper, we mentioned that we used Pareto-efficiency for sorting of scores.\n\nA simple maximize or minimize in 1-object is also a Pareto-efficiency of a 1-object.\n\n\"Take min, or sum\" is not a multi-objective optimization. \n\n---\nQ4.B.\n\nIn the related work EGAN, they have dataset. The dataset is called true data in GAN concept. However, optimization problems does not have true data. \n\nIn other words, GAN has true data from the start point, while optimization algorithms does not. It is the meaning of “without prepared data”.\n\n---\nQ4.C.\n\nThe related work L-GSO is an optimization method for 1-object. In the paper, they used a local-sampling method. The center of local sampling is determined by “current point” (currently optimized point), and it is exactly same with the Pareto front of 1-object.\n\nThe dimension of Pareto front = N-1, where N-object blackbox\n\nTherefore, the local sampling method has only one sampling region.\nHowever, if N >=2, there will be numerous sampling centers since the dimension of Pareto front is N-1.\n\nThis is why we cannot use the local sampling method when N>=2, and this is why training data should be in global region. (Pareto front is not a single point anymore)\n\n---\nQ4.D.\n\nRandom seed is NOT fixed. \nWe tried to use a stochastic environment, because we thought that a stochastic environment is better to evaluate performance of optimizers. (by assuming a very difficult problem)\n", " I really appreciate your valuable questions and advices\n\n---\nQ1. I had a hard time to understand the Algorithm 1 on page 3. What is N and M? Should \"while n = N \" be \"while n < N\"?\n\nA1.1\nTypo correction: I found [while n=N], but it is [while n<N] or [for n in range(N)].\n\nI’m really sorry for confusing you\n\nA.1.2\n\nMeaning of N, M : \n\nN: The number of critic networks == The number of target objects\n\nM: The number of mutation per 1 target object (1 critic network)\n\nM is a hyper-parameter of the evolution strategy, you can choose how many mutation will be.\n\n---\nQ2. In Figure 3, what is BO package used\n\nA2.\n\nIt is an unpublished package of my company. \nBasically, BO is a Gaussuian process based Bayesian Optimization method. 
The special trick of the package is using evolution algorithms for fast estimation of acquisition function. By tuning the acquisition function searching policy, we can control performance of BO. Still, it is a Gaussian process based BO, so it has O(N^3) computational complexity as other GP-BO does.\n\nSince the Guassian process is the core algorithm, the auxiliary evolutionary algorithm does not decisively affect the computational complexity and performance of GP-BO.\n\n---\nQ3. In Figure 5, what does The noise is restored for visualization mean?\n\nA3.\n\nIn figure 5, \n\nG : generator\n\nF : Black-box (non-stochastic)\n\nX = G(z), where z is a seed.\n\nWe defined non-stochastic experiment as\n\nScore = F(X)\n\nAnd we defined stochastic experiment as\n\nScore = F(X) + N, where N is random normal distribution.\n\nFigure 5-b) is plot of F(X), where X\n\n5-a) is plot of F(X) + N \n\nMeaning of restored :\n\nBecause the pool size is too small, it does not fully illustrates overall distribution of noise N. Because we wanted show the range of X that GEO see in the several iterations, we visualized it by adding more noises onto F(X).\n\nTherefore, 5-b) has meaningful information, 5-a) is an auxiliary figure for understanding.\n\n---\nQ4. Why does GEO fail to find the Pareto front for ZDT2?\n\nWe guess the collapsing problem comes from the shape of Pareto front.\n\nZDT1 has a convex Pareto front shape. ZDT3 has a partially concave, partially convex Pareto front shape.\nHowever, ZDT2 has only a concave Pareto front shape.\n\nIn the evolution stage, the sorting algorithm (sorting by pareto efficiency) tends to sample edge state first if the Pareto front has concave shape. Therefore, only edge states survive after few iterations.\nWe tried to solve this problem, but we do not find any efficient method that do not harm optimization performance.\n\n---\nQ5. Why the generated images cannot be recognized but still have high scores?\n\nWe guess it is almost like the situation where we can see in “Adversarial attack”.\n\nAdversarial attack : https://pytorch.org/tutorials/beginner/fgsm_tutorial.html\n\nIn the adversarial attack, we can change the prediction of neural network by adding noise-like image onto original image.\nWe think that the generated image of Figure-6 is almost like an adversarial attack on white back ground image.\n\n---\nC1. For generative neural networks, stabilizing training often is very challenging, and training can be expensive.\n\nYes, they can be. However, black-box optimization studies have a different point of view.\nWe assume that making O(N) computational complexity is one of the most important goals. Also, in the high-dimensional optimization problems, making O(N) complexity is much more important since they require a lot of black-box function calls.\nThe training cost is the 2nd point to be considered.\n\nAlso, what we have done is the development of a stabilizing method. Training is not that expensive because we found a practical method (evolution) to stabilize generative model in optimization.\n\n---\nC2. Due to the complexity of generative neural networks, it will be difficult for the proposed method to scale to handle high dimension problems.\n\nYes. GEO is strongly depends on performance of GPU. We mentioned that this problem mainly comes from memory consumption of the attention network. We are trying to find other efficient network structures.\n\nStill, GEO has much stronger performance in searching extremely high-dimensional space than previous methods.\n\n---\nC3. 
Picture organization and printing problems.\n\nPaper printing problem:\nI wrote this paper in Overleaf, a well-established paper writing site, so it is difficult to answer why you are having printing problems.\n\nI have experienced inconsistencies between PDF readers (Adobe, Chrome, …) and between OSes (Mac, Windows, …) many times. I guess an OS or app inconsistency is the problem.\n\nPicture positions:\nIn other science fields, I have read many papers that arrange figures in a 1-figure / 1-page style, so I thought it was okay to follow that style.\nIt would be better to change the writing style next time. Thank you for your advice.\n", " Thank you for your sincere advice. Your questions get to the point.\nI had to explain my idea in more detail.\n\n---\nQ1. Name of real-world applications\n\nThe real-world problem is searching for the Best/Worst performances of electronic-circuit designs. Specifically, they are Verilog (device) simulators. GEO was developed to solve this problem.\n\nThe Best/Worst performance of a new device must be evaluated before mass production. If we find unexpected bad performance after mass production, it would be a disaster.\n\n1. Verilog simulators predict the performance of an electronic chip on multiple targets (Power, Latency, …)\n2. Generally, the input is a sequential command of length 1,000~10,000\n3. Target scores have random noise\n\nIt is a stochastic, multi-objective, high-dimensional problem.\n\n---\nQ2. How do evolution strategies contribute to GEO?\n\nIn the early development of GEO, we did not use evolution strategies. However, we found that a simple generator-critic network model easily diverges to INF. Also, it has no optimization performance at all.\n\nQ2.1 Why does evolution stabilize the critic networks?\n\nThe divergence is the result of a vicious cycle between the two networks:\n\n1. The generator suggests an input variable X in a wrong direction\n2. The critic network is trained with the input variable X, but X carries no information about the Pareto front\n3. The critic network trains the generator, but it does not give meaningful information\n\nThe role of the evolution strategy is to ensure that the generator and its corresponding X are trapped near the Pareto front. Near the Pareto front, the set of X can be the best training region for the critic network.\n\nQ2.2. Ablation study\n\nWithout evolution, the model diverges from the start and has no optimization performance. We thought that a performance comparison was therefore impossible.\n\nIn the related studies (L-GSO : local sampling method, COMs : regularization method), they did not show an ablation test, but we guess that they also had similar problems.\n\nStill, we agree that it is much better to show the ablation. It should be added in the supplement to show how fast the model diverges.\n\n---\nQ3. How does the generative model generate candidate solutions?\n\n1. A generator is randomly sampled from the pool\n2. The critic network trains the selected generator (using backpropagation, to increase the critic's prediction)\n3. The trained generator suggests a new variable X=G(z)\n4. Check score = F(X)\n5. Sort the new (G, score) pair into the pool\n\nBecause we tried to explain the algorithm with a schematic figure and pseudo-code, we wanted to avoid too much redundancy. However, if it was hard to understand, it would be my fault for not giving an easy explanation. \n\n---\nQ4. What is the network structure and what are the hyper-parameters?\n\nThe network structure is a stack of self-attention models. It is a modification of the original “Transformer” model. 
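For illustration, a minimal sketch of such a stacked self-attention generator is given below. The sizes used here (sequence length, embedding dimension, number of heads and blocks) are placeholders for illustration only and are not the hyper-parameters we actually used.\n\n```python\n# Illustrative sketch only: a generator built as a small stack of self-attention\n# blocks that maps a seed z to a candidate variable X.\nimport torch\nimport torch.nn as nn\n\nclass SelfAttentionBlock(nn.Module):\n    def __init__(self, embed_dim=64, n_heads=4):\n        super().__init__()\n        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)\n        self.norm1 = nn.LayerNorm(embed_dim)\n        self.ff = nn.Sequential(nn.Linear(embed_dim, 2 * embed_dim),\n                                nn.GELU(),\n                                nn.Linear(2 * embed_dim, embed_dim))\n        self.norm2 = nn.LayerNorm(embed_dim)\n\n    def forward(self, h):\n        a, _ = self.attn(h, h, h)          # self-attention over the sequence\n        h = self.norm1(h + a)              # residual connection + layer norm\n        return self.norm2(h + self.ff(h))  # feed-forward + residual + norm\n\nclass Generator(nn.Module):\n    def __init__(self, embed_dim=64, n_blocks=3):\n        super().__init__()\n        self.embed = nn.Linear(1, embed_dim)\n        self.blocks = nn.ModuleList([SelfAttentionBlock(embed_dim) for _ in range(n_blocks)])\n        self.head = nn.Linear(embed_dim, 1)\n\n    def forward(self, z):                  # z: (batch, seq_len) seed\n        h = self.embed(z.unsqueeze(-1))    # (batch, seq_len, embed_dim)\n        for blk in self.blocks:\n            h = blk(h)\n        return self.head(h).squeeze(-1)    # candidate X: (batch, seq_len)\n\n# Example: one generator in the pool producing a candidate from its seed.\nG = Generator()\nz = torch.randn(1, 128)                    # seed (random variable or fixed constant)\nX = G(z)                                   # candidate input for the black-box function\n```\n\n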
The overall structure is illustrated in Figure 2 and Figure A.1 (supplement).\nHyper-parameters of the self-attention model can be found in the supplement.\n\nIn the answer to Q1, we mentioned that the real-world problem has a sequential input structure. This is another reason why we used an attention mechanism for the networks.\n\n---\n\nQ5. For Cartpole-v1, why not compare GEO with RL? Also, it would be much better to briefly introduce what the scenario is\n\nCartpole has a time-sequential I/O structure like\nInput : {Action_0, … Action_t, …}\nOutput : {(State_0, Reward_0), …, (State_t, Reward_t), …}\n\nIf RL can observe all the output information, RL will be much better at solving Cartpole.\n\nHowever, my scenario is :\nWhat if we cannot observe time-sequential data because the cost of the simulator is too high?\nWhat if the observable information is just a single final score? (e.g., Score = sum of all rewards)\nThen, RL cannot be used to solve this problem.\n\nAs I mentioned in Q1, the real-world problem is optimizing Verilog simulations, and they are very heavy.\nSome people believe that RL can find the Best/Worst case of electronic devices because simulators have sequential I/O structures.\nHowever, this problem is not that simple.\n\nLet’s consider a simulation time of 1h and 10,000 steps.\nThen, the total time will be 10,000h just for one iteration.\n\nWe thought that a black-box optimizer is the only solution for this scenario, so we developed GEO.\n\nThis is why we tested Cartpole as a toy model.\n\n---\nQ6. What does Section 4.4 (image generation test) want to show?\n\nIn the related work L-GSO, the authors mentioned that the performance of L-GSO is better on sub-manifolds.\n\nBecause the generator-critic workflow is similar to that of L-GSO, we guessed that the performance of GEO would be better on a low-dimensional manifold.\n\nAlso, we thought that an image generation problem is the simplest way to show manifold structures.\nTherefore, we expected that GEO would generate blurred images of numbers (because a smooth image is a low-dimensional manifold structure).\n\nHowever, the results look noise-like, not manifold-like.\nThere must be a difference between GEO and L-GSO, but we have not clearly identified what makes the difference.\n\n", " The paper presents a new black-box optimization method called GEO. It is designed for solving stochastic and multi-objective black-box large-scale optimization problems. \n\nGEO has two stages: evolution and surrogate/critic network training. The evolution pool stores a number of generators ranked by fitness scores. Backpropagation from the critic network provides the mutations for the generators. \n\nWhen compared with baseline methods including L-GSO, BO, CMA-ES and NSGA-II, GEO shows competitive results on optimization functions, in finding Pareto fronts, etc. The strengths of the paper include:\n1) It proposes an interesting method that utilizes generative neural networks and an evolution strategy. Evolution helps to stabilize the training of the critic networks while the critic networks provide efficient mutations. \n2) Experimental results are better than those of baseline methods in many cases.\n\nThe weaknesses of the paper include:\n1) For generative neural networks, stabilizing training often is very challenging, and training can be expensive. \n2) Due to the complexity of generative neural networks, it will be difficult for the proposed method to scale to handle high-dimensional problems. \n3) The organization of the paper can be better. The pictures and algorithm should be close to their description pages. 
The pictures' quality is not high. I failed to print the paper using two different printers; there seems to be something wrong with the format of this paper. I had a hard time understanding Algorithm 1 on page 3. What are N and M? Should \"while n = N \" be \"while n < N\"?\n\nIn Figure 3, what BO package is used? \n\nIn Figure 5, what does \"The noise is restored for visualization\" mean?\n\nWhy does GEO fail to find the Pareto front for ZDT2?\n\nWhy can the generated images not be recognized but still have high scores?\n I think the authors adequately addressed the limitations and potential negative societal impact of their work.", " The authors introduce a new black-box optimization technique that works for multi-objective, high-dimensional functions. They do this by learning a critic function and a population of \"generator\" functions. The outputs of the generator are used to train the critics. The generators backpropagate through the critics in order to improve their scores. The use of evolution strategies is key to stabilizing the training of the critics. Originality:\n\nThe paper seems to be original, though I'm not extremely familiar with the SOTA on multi-objective high-dimensional blackbox optimization. \n\nQuality:\n\nThe paper has interesting experiments. However, there are many simple baselines that seem relevant for blackbox optimization of high-dimensional problems. In particular, some well-known works include:\n\nSalimans, Tim, et al. \"Evolution strategies as a scalable alternative to reinforcement learning.\" arXiv preprint arXiv:1703.03864 (2017).\n\nSuch, Felipe Petroski, et al. \"Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning.\" arXiv preprint arXiv:1712.06567 (2017).\n\nSehnke, Frank, et al. \"Policy gradients with parameter-based exploration for control.\" International conference on artificial neural networks. Springer, Berlin, Heidelberg, 2008.\n\nIn particular, OpenAI ES and PGPE are similar to this approach in that they estimate a gradient. These papers could be a fairer comparison since the baselines used in the paper are notoriously poor at high-dimensional optimization. Furthermore, the methods are fairly unsophisticated, making them easy to compare to.\n\nFurthermore, I'm not sure what the purpose of the LeNet-5 experiments is, seeing as there is no baseline comparison. \n\nClarity:\n\nThe paper is poorly structured and written to the point where it is quite difficult to understand. See the \"Questions\" section for the many points of uncertainty. \n\nThere are numerous grammatical errors that make it difficult to read. In particular, the paper often either misuses or is missing articles (words like \"the\" and \"a\").\n\nThe pseudocode uses \"while n=N\", which is not at all clear. I'm assuming that they mean \"for n $\\in$ N\". \n\nThe authors often use imprecise language, which can cause confusion. For example, in Line 119 they write about \"accidentally-high-scoring elements\". I'm assuming that they are referring to points that are high due to stochasticity -- however, calling those \"accidental\" is imprecise/incorrect.\n\nSignificance:\n\nThe approach is interesting and seems to perform well -- however, with the missing baselines and lack of clarity on the details, it is very difficult to evaluate its significance. 1. In Line 107, the authors write: \"We experiment with both random variables and pseudo-random constants as input feeds z\". Where are the results? 
If it is in a plot or figure, please reference it (I could not find which one it is referring to). If they are not, please at least mention what the results of the experiments were. What do you mean by \"pseudo-random constants\"? How is it different from a \"random variable?\" Are the random variables somehow not generated by a \"pseudo-random\" number generator? If $z$ is constant throughout training, then is it really a \"generative\" method? How would optimizing $G$ be any different than optimizing \"x\" directly?\n\n2. Line 140 the authors write: \"We experimented with both stochastic and non-stochastic generators\". Again, I'm not sure where in the paper the results are. In particular, this is an interesting experiment: If non-stochastic generators worked, then this wouldn't really be a generative process. Can you get away with not using generators and instead just having a large population?\n\n3. How do you measure pareto efficiency in the paper? In the pseudo-code \"ParetoEfficiency\" is written. In the paper, on Line 117 the authors write that data are \"sorted and selected by Pareto efficiency\". How exactly is this calculated?\n\n4. Similar to question 3, there are simply many parts of the paper that are poorly explained, so much so that I cannot list out every question that I have. Here are a few:\n\nA. Line 19: I understand that there are multiple objectives to optimize, but how are these objectives combined? The authors simply write \"Optimize\" without explanation. Do you take the min? Do you take the sum? If they're combined, how is it different from a single objective? Or instead, are you looking for multiple $X$ / trying to find the best $X$ for each of the $m$ functions? I assume you take the min, but this is not made clear in the paper.\n\nB. Line 95: \"GEO uses only searched points without prepared data\". I think I have a good idea of what this means, but it is not made explicitly clear at all. What do you mean by \"prepared data?\"\n\nC. Line 113: \"The critic network learns variable $x$ in a global region. Global training is essential\". What does this mean? What would be the alternative approach? Again, while I think I have a decent guess of what the authors are trying to convey, it is not completely clear.\n\nD. In Cartpole-V1 is the random seed fixed? My understanding is that cartpole's initialization is random, so observing the state is required. The authors seem aware of the limitations of the work. They mention the \"Pareto front collapsing problem\" and the \"GPU memory consumption problem\".", " This paper proposes a combination of deep generative model and evolution strategy, targeting high-dimensional multi-objective black-box optimization. The approach consists of N neural networks, called critic networks, where N is the number of objective, and p generative models, from which solutions to the original problem are generated by giving latent vectors z. The critic networks are trained to approximate the objective functions and the generative models are trained on the critic networks to generate a better solution. Literature Review:\n\nThough this paper targets high-dimensional black-box optimization, there is almost no literature review on this topic. For example, in evolution strategies, which is the basis of the proposed approach, different approaches have been proposed for high-dimensional problems, such as SEP-CMA [1], VD-CMA[2], LM-CMA[3], etc. 
\n[1] A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity, PPSN 2008\n[2] Comparison-Based Natural Gradient Optimization in High Dimension, GECCO 2014\n[3] LM-CMA: An alternative to L-BFGS for large-scale black box optimization, Evolutionary Computation (2017)\n\nMoreover, though one of the contributions of this paper is a surrogate-assisted evolution strategy, no literature review has been performed on this line. See for example [4, 5] and references therein.\n[4] A global surrogate assisted CMA-ES, GECCO 2019.\n[5] pysamoo: Surrogate-Assisted Multi-Objective Optimization in Python, arXiv (2022)\n\nMulti-objective optimization has also not been reviewed. \n\nExperimental Evaluation:\n\nThe baseline approaches are very limited, as the literature review has not been performed well. NSGA-II is a traditional GA-based approach mainly for bi-objective optimization, and a lot of improvements on it have been reported. Surprisingly, it has been applied to single-objective optimization in Figure 3, which is not understandable. CMA-ES is basically an approach for single-objective optimization, and there are several extensions of CMA-ES to multi-objective optimization, but it has not been explained which algorithm is used. \n\nNo surrogate-assisted evolution strategies or other surrogate-assisted approaches are compared in the experiments. No state-of-the-art evolutionary multi-objective optimization approaches such as MOEA/D or NSGA-III are compared. I suggest you look at the BBOB BiObjective Testbed and the algorithms reported there (https://numbbo.github.io/data-archive/bbob-biobj/).\n\nBecause of the lack of literature review and comparison with state-of-the-art approaches, I cannot judge the goodness of the proposed approach.\n The algorithm description looks strange to me. What do you mean by “while n = N do” and “while m = M do”?\n See the comment above: the lack of literature review and the lack of experimental comparison.", " This work proposed a new Generative Evolutionary Optimization (GEO) method combining Evolutionary Strategies and a Generative Neural Network, aiming to address stochastic multi-objective high-dimensional optimization problems. The proposed GEO is verified on a set of benchmark optimization functions (both single-objective and multi-objective), and on black-box optimization of neural networks in reinforcement learning and image recognition. The reviewer did not see similar work addressing the stochastic multi-objective high-dimensional optimization problem. Though there are related works such as L-GSO, EGAN and others, the authors clarify the differences between them.\n\nThe organization of the technical part of this manuscript should be considerably improved. For the reviewer, it is quite difficult to get how GEO exactly works, and the introduction of the sub-routine techniques is quite vague, e.g., how the generative model generates candidate solutions. Besides, the empirical studies section is also problematic. It is unclear why Cartpole-v1 and LeNet-5 are used for verification and how to fairly assess the quality of GEO based on them. For example, why not compare GEO with RL methods on Cartpole-v1? What is Cartpole-v1? How representative is it? Why do we have to regard it as a black-box optimization problem if some RL methods can solve it better with the MDP structure? 
And it is also unclear what information has been delivered to the readers in Section 4.4.\n\nThough this work focuses on the stochastic multi-objective high-dimensional optimization problem, it is not clear what real-world problem has these features, and there seem to be no empirical studies on a problem with all these features. Thus, it is difficult to judge how significant the proposed GEO is. 1. Please name a few real-world applications involving stochastic multi-objective high-dimensional optimization problems. The current experimental studies seem to consider 4 benchmarks, each touching one or two aspects of the above problem features. It would be interesting to see an empirical case study on the stochastic multi-objective high-dimensional optimization problem.\n\n2. How do Evolution Strategies contribute to GEO? Section 3.4 states that ES can help stabilize the training, but why is this the case? Please clarify this part. Besides, the ablation studies verifying the importance of ES in GEO are missing. \n\n3. How does the generative model generate candidate solutions? The reviewer suggests first introducing the general framework of GEO at the beginning of Section 3, so readers can easily get how GEO works. In the current manuscript, it is hard for those who are not familiar with L-GSO and EGAN to understand GEO, even its basic idea.\n\n4. What are the network structure and hyper-parameters of the generative models and critic models?\n\n5. For Cartpole-v1, why not compare GEO with RL methods if RL methods perform better? Also, it would be much better to briefly introduce what the Cartpole-v1 scenario is.\n\n6. It is not intuitive what Section 4.4 wants to show. \n\n\n\n No. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 2, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4, 5 ]
[ "9Qp49S7q6ke", "g8BswzUN727", "R9av0CHDtFk", "93Hq1E1ihes", "YpCyGK0O7EF", "YpCyGK0O7EF", "3y_j2y_hJS", "8Wl0CqN2cQz", "_Ys0WKRx4J3", "iAhth3NqgT1", "GL55imJ7D-A", "YpCyGK0O7EF", "iAhth3NqgT1", "78PKyQx30TJK", "mQZJRhuqww8", "QKlduAyTPTW", "RItkzVUquQ_", "bUbqRPGHKq", "nips_2022_2nYz4WZAne4", "nips_2022_2nYz4WZAne4", "nips_2022_2nYz4WZAne4", "nips_2022_2nYz4WZAne4" ]
nips_2022_q0XxMcbaZH9
Learning Equivariant Segmentation with Instance-Unique Querying
Prevalent state-of-the-art instance segmentation methods fall into a query-based scheme, in which instance masks are derived by querying the image feature using a set of instance-aware embeddings. In this work, we devise a new training framework that boosts query-based models through discriminative query embedding learning. It explores two essential properties, namely dataset-level uniqueness and transformation equivariance, of the relation between queries and instances. First, our algorithm uses the queries to retrieve the corresponding instances from the whole training dataset, instead of only searching within individual scenes. As querying instances across scenes is more challenging, the segmenters are forced to learn more discriminative queries for effective instance separation. Second, our algorithm encourages both image (instance) representations and queries to be equivariant against geometric transformations, leading to more robust, instance-query matching. On top of four famous, query-based models (i.e., CondInst, SOLOv2, SOTR, and Mask2Former), our training algorithm provides significant performance gains (e.g., +1.6 – 3.2 AP) on COCO dataset. In addition, our algorithm promotes the performance of SOLOv2 by 2.7 AP, on LVISv1 dataset.
Accept
This paper leverages dataset-level uniqueness and transformation equivariance to improve state-of-the-art instance segmentation methods. The reviews were overall positive about the submission: the reviewers especially highlighted the good experimental results, the relevance of the scene-level query embedding and its complementarity with the equivariance constraints. The authors' feedback brings important answers to some reviewers' concerns. In particular, the new conclusive experiments on LVIS and the extension of the method to panoptic segmentation widen the approach's applicability and have been appreciated. Other answers in the rebuttal did not convince some reviewers, and there remain issues about the novelty of the approach, the terminology and positioning with respect to 'query-based' approaches, and the extension of the method to photometric equivariance. The AC carefully read the submission. The AC considers that the idea of querying instances from the whole training dataset is interesting. Despite the limited contribution of the equivariance loss, which has been used in several related scenarios, the whole approach is relevant and well designed in the context of instance segmentation. The experiments are also convincing. It is a pity that the authors did not take the opportunity to update the paper during the discussion period, especially with the new experiments and the clarifications requested by the reviewers. Based on the relevance of the approach and its good experimental results obtained over various baselines on several datasets, the AC recommends acceptance. The AC highly encourages the authors to include the elements discussed in the rebuttal to improve the quality of the final paper.
train
[ "-BBrdp5JIr", "ibvfhf9PqGX", "V37CqIzoxap", "A1XXFSGDAhH", "mUwpqmRMjEz", "pEIfrRu8jOA", "z_EgHqToxBK", "dbmNjpoutGV", "F_zZIFHbDX2" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the response to my review and all other reviews, and I think authors covered the issues pretty well. I do believe adding the new results on the additional datasets will make the paper stronger. The newly proposed training paradigm is novel, and experiments support consistent significant improvements over popular models. Hence, I stay by my initial rating and propose to accept the paper.", " We thank the reviewer for the time and constructive feedback. We address the main questions below:\n\n\n#### **Q1. Too many equations and the notion of $\\hat{O}^{I'}\\_{n}$ and $\\hat{W}^{g}\\_{n}$.** \n\n**A1:** The definition of $\\hat{O}^{I'}\\_{n}$ and $\\hat{W}^{g}\\_{n}$ have been given in Line 176 and Line 240. We will try our best to improve our presentation and formulation. \n\n---\n\n#### **Q2. Show the iteration time and GPU memory cost in Table 3 (b) and (c) to show the trade-off.** \n\n**A2:** For Table 3 (b), as the external memory capacity is fixed, there is no variation in training speed or GPU memory cost across different sampling strategies.\n\nFor Table 3 (c), please find the updated version below:\n\n| capacity | AP | AP$_{S}$ | AP$_{M}$ | AP$_{L}$ | Training speed (minutes/epoch) | GPU memory cost (GB) |\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|- | 35.5 | 16.9 | 38.9 | 50.2 | 90 | 19.68 |\n| mini-batch | 36.9 | 17.8 | 40.1 | 53.1 | 106 | 19.68 |\n| 10k pixels | 37.0 | 17.9 | 40.3 | 53.2 | 91 | 20.24 |\n| 50k pixels | 37.3 | 18.1 | 40.4 | 53.6 | 93 | 22.37 |\n| 70k pixels | 37.5 | 18.5 | 40.4 | 53.7 | 93 | 23.72 |\n| **100k pixels** | 37.6 | 18.6 | 40.6 | 54.0 | 94 | 25.37 |\n\nAs seen, the optimal configuration of the memory capacity, *i.e.*, 100k pixels, only causes marginal training speed delay (~5%), as we mentioned in our manuscript.\n\n---\n\n#### **Q3. K-Net.** \n\n**A3:** K-Net will be cited and involved in the comparison. \n\n---\n\n#### **Q4. 'kernel-based' *vs* 'query-based'.** \n\n**A4:** Both the terms 'query-based' and 'kernel-based' are widely adopted terminologies in this field [18, 41, 42]. As the reviewer mentioned, MaskFormer and Mask2Former even use 'mask embeddings' (and we personally feel it is hard to say their mask embeddings are just dynamic kernels). We do not think this is a big issue. And our idea, clearly, is general for the query-matching-based instance segmentation paradigm. \n\n---\n\n#### **Q5. Photometric transformation.** \n\n**A5:** This is due to the nature of transformation-equivariance. We have clearly discussed this issue in Line 68 in the suppl.. As our algorithm arises from the restriction of equivariant transformations that have to be elements from a group of linear transformation operators. Therefore, we can only adopt ordinary linear transformation (*i.e.*, flipping and cropping); arbitrary transformation (*i.e.*, color jittering and blur) is not applicable for our algorithm. \n\nSpecifically, we encourage the transformation $g$ on feature representation to be ``equivariant against input imagery transformations'', which states:\n\n\\begin{equation}\n\\forall g\\in G: f(g(I))\\approx g(f(I))=g(\\textbf{I}).\n\\end{equation}\n\nFor color jittering and blurring, **no existing solution can obtain a non-trivial transformation $g$ to accommodate that the output representation changes in the exact same way with the same transformation $g$ applied to the input $I$**. \n\nBut your question points out our ongoing study: How to find out such a corresponding non-linear homomorphic transformation $g'$? \n\n---\n\n#### **Q6. 
Panoptic segmentation.** \n\n**A6:** Yes, our method is generic for query-based models, irrespective of instance or panoptic segmentation. For panoptic segmentation, the only modification is for the stuff classes; as the stuff classes are not instance-discriminative, the training objectives for cross-scene querying (Eq. 5) of stuff classes should be the groundtruth stuff masks, instead of all-zero matrices. \n\nBelow we further report additional experimental results on MS COCO Panoptic, on the top of Mask2Former. We can find that our algorithm improves PQ by **1.2**. \n\n| Method | Backbone | #Epoch | PQ | PQ$^{th}$ | PQ$^{st}$ |\n| :-: | :-: | :-: | :-: | :-: | :-: |\n| Mask2Former | Swin-B | 50 | 56.1 | 62.5 | 46.7 |\n| **+Ours** | Swin-B | 50 | **57.3** | **64.1** | **47.4** |", " We thank the reviewer for the time, and constructive feedback. We address the main questions below:\n\n#### **Q1. Relationship between Uniqueness and Equivariance.** \n\n**A1:** As we repeatedly mentioned in our manuscript, our contributions: dataset-level uniqueness and transformation equivariance, are orthogonal and respectively address two core properties of the query-based instance segmentation paradigm (Line 3, Line 39, Line 88, Line 158). Neither of them has been explored so far. And we empirically show that, uniqueness and equivariance both boost the performance (Line 330-335); as they are orthogonal, their integration achieves further better performance (Line 335-337). \n\n---\n\n#### **Q2. Equivariance-based augmentation.** \n\n**A2:** Sorry for this confusion. \n\nFirst, please note that our approach is not an equivariance-based augmentation technique but an equivariant representation learning strategy for any query-based segmenter. Concretely, traditional data augmentation methods only deal with input data transformation (input images and labels), with no constraint about the relation between the feature representations and queries produced from the transformed views. They just simply use the transformed images and annotations as **additional individual training examples**. In contrast, our approach establishes the valuable equivariance property of both query embedding and feature representation with respect to transformations (Line 109), which benefits per-instance description for mask generation. \n\nSecond, the key is NOT the input transformation; the key is the transformation equivariance property. We enforce the matching between the query and feature to be equivariant against input imagery transformations. This view is fresh and insightful. \n\nThird, traditional augmentation strategy in query-based instance segmentation can be viewed as a special case of our approach, *i.e.* Eq. (7). Please refer to Line 231-233 for more detailed discussion.\n\nForth, we provide extensive comparison with traditional augmentation strategy. In Table 3(d), the second row refers to the performance of traditional augmentation strategy. In Table S2 in the suppl., we provide very detailed comparison. These experimental results clearly and comprehensively demonstrate the advantage of our equivariance learning strategy over the common data augmentation technique. \n\n---\n\n#### **Q3. 
Format of cross-entropy loss and focal loss for $\\mathcal{L}_{inter\\\\_mask}$.** \n\n**A3:** Cross-entropy loss and focal loss, in our case, are in the formats of:\n\n\\begin{equation}\n \\frac{1}{H \\times W}\\sum^{H \\times W}\\_{i}-log(1-\\hat{O}^{I'}\\_{n,\\ i})\n\t\\quad\n\t\\text{and}\n\t\\quad\n \\frac{1}{H \\times W}\\sum^{H \\times W}\\_{i}-(\\hat{O}^{I'}\\_{n,\\ i})^{\\gamma} log(1-\\hat{O}^{I'}\\_{n,\\ i})\n\\end{equation}\n\nwhere $\\hat{O}^{I'}\\_{n}\\in[0, 1]^{H \\times W}$ refers to the $n$-*th* inter-image instance prediction mask for image $I'$ (Line 173-179), and $\\hat{O}^{I'}\\_{n,\\ i}\\in\\hat{O}^{I'}\\_{n}$. \n\n---\n\n#### **Q4. Limitation and social impact.** \n\n**A4:** We clarify we have discussed the limitation and broad impact in S4 in suppl.. ", " We thank the reviewer for the time and constructive feedback. We address the main questions below.\n\n#### **Q1. Uniqueness and robustness.** \n\n**A1:** Thanks for your careful review. Yes, here dataset-level uniqueness and transformation equivariance (Line 5) are referred to simply as ''Uniqueness'' and ''Robustness''; or, in other words, the desired ''Uniqueness'' and ''Robustness'' properties are achieved by addressing ''dataset-level uniqueness'' and ''transformation equivariance'' (Line 158). After reading your comments, we feel such statement may cause some misleading. We will rephrase related sentences. \n\n---\n\n#### **Q2. If the equivariance is estimated on the feature representation, what makes it different to traditional concepts, *e.g.* invariance?** \n\n**A2:** Sorry for this confusion. As we mentioned in Line 101-103, invariance is a special case of equivariance. For invariance, the feature representation is desired to NOT vary with the input transformation. That is to say, given a representation $f$ and a transformation $g$ for input $I$, invariance can be expressed as: $f(g(I)) \\approx f(I)$. Differently, the representation $f$ is said to be equivariant with $g$ if $f(g(I)) \\approx g(f(I))$. \n\nInvariance is not suitable for instance segmentation task, as the input transformation should cause an exact change (instead of no change) in the segmentation mask and feature map. This is also why we address equivariance -- it essentially addresses the very nature of this task. \n\n---\n\n#### **Q3. The evaluation is somewhat weak. Only COCO is adopted.** \n\n**A3:** We clarify that we follow the standard evaluation protocol in this field and test our algorithm on the top of SIX famous instance segmentation models, *i.e.*, CondInst, SOLOv2, SOTR, Mask2Former (in the main paper), SparseInst and SOLQ (in the supplementary), and different backbone networks, *i.e.*, ResNet-50, ResNet-101, and Swin-S/B/L. \n\n\nTo better address your concern, we conduct additional experiments on LVISv1 [ref1]. The results are listed below. On top of SOLOv2, our algorithm provides significant performance gain, *i.e.*, **2.7** mask mAP. Our experimental results on COCO and LVISv1 thoroughly demonstrate the power of our idea and the effectiveness of our algorithm. 
\n\n\n| Method | Backbone | #Epoch | AP | AP$_{50}$ |AP$_{75}$ |AP$_{S}$ |AP$_{M}$ |AP$_{L}$ |AP$_{r}$ |AP$_{c}$ |AP$_{f}$ |\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\n| SOLOv2 | Resnet-50 | 36 | 21.4 | 34.0 | 22.8 | 14.9 | 29.1 | 34.8 | 9.5 | 20.9 | 27.6 |\n| **+Ours** | Resnet-50 | 36 | **24.1** | **37.4** | **25.5** | **17.0** | **31.5** | **39.1** | **13.5** | **22.8** | **29.7**\n\n\nIn addition, our method is even generic for query-based models, irrespective of instance or panoptic segmentation. For panoptic segmentation, the only modification is for the stuff classes; as the stuff classes are not instance-discriminative, the training objectives for cross-scene querying (Eq. 5) of stuff classes should be the groundtruth stuff masks, instead of all-zero matrices. \n\nBelow we further report additional experimental results on MS COCO Panoptic, on the top of Mask2Former. We can find that our algorithm improves PQ by **1.2**. \n\n| Method | Backbone | #Epoch | PQ | PQ$^{th}$ | PQ$^{st}$ |\n| :-: | :-: | :-: | :-: | :-: | :-: |\n| Mask2Former | Swin-B | 50 | 56.1 | 62.5 | 46.7 |\n| **+Ours** | Swin-B | 50 | **57.3** | **64.1** | **47.4** |\n\n\n[ref1] LVIS: A Dataset for Large Vocabulary Instance Segmentation", " We thank the reviewer for the time and constructive feedback. We address the main questions below.\n\n#### **Q1. Show benefits on top of a good model. Eg, evaluation on LVIS.** \n\n**A1:** To address your concern, we conduct experiments on LVISv1. The results are listed below. On top of SOLOv2, our algorithm provides significant performance gain, *i.e.*, **2.7** mask mAP. Our experimental results on COCO and LVISv1 thoroughly demonstrate the power of our idea and the effectiveness of our algorithm. \n\n\n| Method | Backbone | #Epoch | AP | AP$_{50}$ |AP$_{75}$ |AP$_{S}$ |AP$_{M}$ |AP$_{L}$ |AP$_{r}$ |AP$_{c}$ |AP$_{f}$ |\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\n| SOLOv2 | Resnet-50 | 36 | 21.4 | 34.0 | 22.8 | 14.9 | 29.1 | 34.8 | 9.5 | 20.9 | 27.6 |\n| **+Ours** | Resnet-50 | 36 | **24.1** | **37.4** | **25.5** | **17.0** | **31.5** | **39.1** | **13.5** | **22.8** | **29.7**\n\n\nIn addition, our method is even generic for query-based models, irrespective of instance or panoptic segmentation. For panoptic segmentation, the only modification is for the stuff classes; as the stuff classes are not instance-discriminative, the training objectives for cross-scene querying (Eq. 5) of stuff classes should be the groundtruth stuff masks, instead of all-zero matrices. \n\nBelow we further report additional experimental results on MS COCO Panoptic, on the top of Mask2Former. We can find that our algorithm improves PQ by **1.2**. \n\n| Method | Backbone | #Epoch | PQ | PQ$^{th}$ | PQ$^{st}$ |\n| :-: | :-: | :-: | :-: | :-: | :-: |\n| Mask2Former | Swin-B | 50 | 56.1 | 62.5 | 46.7 |\n| **+Ours** | Swin-B | 50 | **57.3** | **64.1** | **47.4** |\n\n---\n\n#### **Q2. Mask2Former' results on COCO.** \n\n**A2:** Thank you so much for your careful review. Our previous re-implementation was based on mmdetection, which only released code for panoptic segmentation at the time of submission. When we modified and adopted the mmdetection version of Mask2Former to COCO instance segmentation, some hyperparameters were not changed and we actually ran fewer iterations compared with the original Mask2Former. With your reminder, we rechecked our code and solved this issue. Below are the new results. 
As seen, the scores of our reproduced Mask2Former are almost the same as the original ones, and our algorithm boosts the performance by **1.6** mask mAP.\n\n| Method | Backbone | #Epoch | AP | AP$\\_{50}$ |AP$\\_{75}$ |AP$\\_{S}$ |AP$\\_{M}$ |AP$\\_{L}$ |\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\n| Mask2Former | Swin-L | 100 | 50.2 | 74.8 | 54.7 | 29.2 | 53.8 | 71.1 | \n| **+Ours** | Swin-L | 100 | **51.8** | **76.0** | **56.8** | **29.9** | **55.1** | **73.3** |", " This work proposes a novel instance segmentation training paradigm that can be combined with any model. Two properties are explored: dataset-level uniqueness and transformation equivariance. This training is combined with quite a few SOTA models, and it is shown that a significant gain of 2-3 AP on COCO. Training is only 5% slower than standard training, with no speed loss at instance. Strengths:\n+ This training paradigm is novel as far as I can tell, it is properly motivated, clearly explained and both additions are orthogonal, which results in experiment section clearly show\n+ When I started reading the paper, I expected that training cost will be much higher than baseline, but upon reading the implementation strategies (namely external memory, sparse sampling and instance-balanced sampling) I was convinced that the loss in speed will be minor, and it was, it is just 5%! Kudos for the engineering aspect of the paper\n+ Experimental section is very well designed. Authors compare with a lot of SOTA approaches, they incorporate their method with 4 different representative models. Then, they clearly explain the effect of inter-scene and equivariance, they are both providing performance boost and are orthogonal, so combined they achieve biggest jump. Finally, ablation over hyperparams is very good, it shows that the newly added hyper-parameters are fairly robust (at least on COCO dataset), and it helps the reader select them on their problem.\n+ Performance gains are very good, hard to be ignored. Performance of the best result is SOTA. \n\nWeaknesses:\n- Whole paper evaluated on only one dataset. I do know COCO is standard, but for a paper that claims to improve training paradigm, I think maybe one more dataset should be used. Not to achieve SOTA necessarily, but to show benefits on top of a good model. Eg, evaluation on LVIS would greatly improve the paper, in my opinion.\n- Minor: When reading Mask2Former paper, it seems their equivalent COCO result is 50.1, not 48.5 like here. Can authors explain in more detail what is the difference in their implementation and why this happens? I asked my questions in weaknesses. Authors should focus on those in the rebuttal. Nothing to add here.", " This paper proposes a new training framework that exploits query embedding learning. Dataset-level uniqueness and transformation equivariance are introduced and demonstrate promising results on the benchmark datasets. + Framework contribution: this work brings a new paradigm shift that goes beyond inner-scene training to an inter-scene level query embedding separation.\n\n+ Introducing equivariance is intuitive and the results are convincing.\n\n- The evaluation is somewhat weak. Only COCO is adopted. In Ln 158, uniqueness and robustness are named as two crucial properties. Can these be consistent with the statement in abstract Ln 5?\n\nIf the equivariance is estimated on the feature representation, what makes it different to traditional concepts, e.g. invariance? 
Despite the novelty of introducing equivariance, the in-depth analysis of the mathematical properties is limited. Empirical results alone cannot fully reflect the rationale behind the equivariance equation.", " This paper proposes two ways to improve the performance of query-based instance segmentation networks. First, by using the queries to retrieve the corresponding instances from the whole training dataset, the segmenters are forced to learn instance-unique queries. Second, by performing geometric transformations and encouraging the network predictions to be equivariant, the authors expect to learn more robust instance-query matching. Experiments show that both ways can improve instance segmentation effectively. Pros:\n-The idea is simple and reasonable.\n\n-Overall, the paper is easy to read and understand.\n\n-Experimental results are good. \n\nCons:\n-The relationship between the two proposed contributions does not seem to be clear. And the equivariance-based augmentation does not seem to be tailored for instance segmentation, or even for the query-based segmentation framework. I think it is a common augmentation trick which may not be suitable to claim as a contribution of this paper. \n\n-What is the detailed format of the cross-entropy loss and focal loss for $\mathcal{L}_{inter\_mask}$? See the weakness part. The authors didn't discuss the limitations and potential negative societal impact of their work. ", " - The paper proposes to enhance the dataset-level uniqueness and transformation equivariance of queries in query-based segmentation methods.\n- The proposed method essentially adopts some techniques from contrastive learning to learn discriminative queries.\n- The method consistently improves baseline methods like CondInst, SOLOv2, SOTR, and Mask2Former. ## Strengths:\n1. The paper exhibits consistent significant improvements over many methods including CondInst, SOLOv2, SOTR, and Mask2Former.\n2. The paper reveals a new direction of improvement over query-based instance segmentation methods by enhancing the discrimination ability of queries.\n\n## Weaknesses:\n1. There are too many equations, which makes the paper hard to follow. For example, the notion of $O^I_{n}$ in eq. 5 and $\hat{W^{g}_{n}}$ in eq. 8.\n2. It may also help to show the iteration time and GPU memory cost in Table 3 (b) and (c) to show the trade-off.\n3. K-Net [a], which has a better speed-accuracy trade-off than SOLOv2 and CondInst, is not mentioned or compared in the paper.\n4. The description is not accurate and should be updated. This paper essentially improves methods that use dynamic kernels, i.e., ‘kernel-based’ methods rather than 'query-based' methods. SOLOv2 [37], CondInst [32], SOTR [39], K-Net [a], MaskFormer [40], and Mask2Former [38] essentially all use some strategy to predict content-aware kernels and then use the kernels to perform convolution with the feature map to produce masks. This is explicitly described in the original papers of SOLOv2, CondInst, SOTR, and K-Net, although MaskFormer and Mask2Former describe them as ‘mask embeddings’.\n5. The augmentation techniques are not sufficiently studied, i.e., only geometric transformations are studied in Table S2, while photometric transformations and detailed parameters like the cropping range are not studied.\n[a] K-Net: Towards Unified Image Segmentation, NeurIPS 2021. 1. Why are color jittering and blur not suitable? 
It is quite common in contrastive learning [58,59,60] to use both geometric and photometric transformations; therefore, intuitively, the reviewer does not understand why it is not suitable.\n2. Methods like MaskFormer, Mask2Former, and K-Net are used not only for instance segmentation but also for panoptic segmentation. Can the proposed method have a more generic form that works for panoptic segmentation, or can it only be used for instance discrimination? 1. As asked in the questions, if the method only applies to instance segmentation, it might be better to clarify its scope." ]
[ -1, -1, -1, -1, -1, 7, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "mUwpqmRMjEz", "F_zZIFHbDX2", "dbmNjpoutGV", "z_EgHqToxBK", "pEIfrRu8jOA", "nips_2022_q0XxMcbaZH9", "nips_2022_q0XxMcbaZH9", "nips_2022_q0XxMcbaZH9", "nips_2022_q0XxMcbaZH9" ]
nips_2022_kADW_LsENM
Video-based Human-Object Interaction Detection from Tubelet Tokens
We present a novel vision Transformer, named TUTOR, which is able to learn tubelet tokens, served as highly-abstracted spatial-temporal representations, for video-based human-object interaction (V-HOI) detection. The tubelet tokens structurize videos by agglomerating and linking semantically-related patch tokens along spatial and temporal domains, which enjoy two benefits: 1) Compactness: each token is learned by a selective attention mechanism to reduce redundant dependencies from others; 2) Expressiveness: each token is enabled to align with a semantic instance, i.e., an object or a human, thanks to agglomeration and linking. The effectiveness and efficiency of TUTOR are verified by extensive experiments. Results show our method outperforms existing works by large margins, with a relative mAP gain of $16.14\%$ on VidHOI and a 2 points gain on CAD-120 as well as a $4 \times$ speedup.
Accept
*Summary* This paper presents a novel vision Transformer TUTOR for human-object interaction detection in videos. TUTOR structurizes a video into a few tubelet tokens by agglomerating and linking semantically-related patch tokens along spatial and temporal domains. Experiments are conducted on VidHOI and CAD-120, showing that the proposed approach is more effective and efficient than previous patch token-based reference methods. *Reviews* The paper received 3 reviews, with ratings: 6 (weak accept), 5 (borderline accept) and 5 (borderline accept). All reviewers voted to accept the paper, but raised some concerns: - the claim that tubelet tokens align with semantic instances is not rigorously supported (authors added additional evidence). - an ablation study of the global context refining mechanism is required (this has been added by the authors). - an experimental comparison to 'Detecting Human-Object Relationships in Videos' [17/18] is required (authors note that code is not available, but instead provide a comparison with an alternative similar approach that reported even better performance). - clarification of the relationship between this model and Deformable DETR is required - some details of the model require clarification. *Decision* I am satisfied that the substantive concerns raised by reviewers have been addressed by the authors, and with all reviewers voting to accept the paper I also agree. I encourage the authors to carefully update the manuscript to address all the discussions below.
train
[ "65bke6qVK4o", "zv3eb37RHH7", "1wvoKxfwE8v", "xz2OjMH20nV", "OVZKHj9T7eG", "tKUEKSCtve", "DcCN3VF-u-a", "c59arUS9HU9", "N-q0gr_DFXA", "Dl2wXbbcaRJ", "Y74njvFFTJn", "QnHTu2chk8", "OTDSwLaGAkU", "dXghCT-xeZk", "reFFUNfxdH1" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for your response and appreciation. It is worth emphasizing that [17] is not specifically proposed for V-HOI detection, but for dynamic scene graph generation (DSGG), which aims to detect the object relationships in a video. Note that, DSGG consists of only several action categories but mostly spatial relationships (e.g., above, behind). In comparison, the human-object relationships are mostly described by actions in V-HOI detection. Due to the task gap, [17] did not reported the results on VidHOI and CAD-120 (for HOI detection). Besides, its source code is not available. This is why we introduced it in related work but did not compared it in table 4.\n\nFortunately, we find another related method STTran [A2], which reported better results than [17] on DSGG. The official code of STTran [A2] is available. So we compare our method with STTran [A2] on VidHOI instead.\n\nConcretely, we fine-tuned it for 50 epoches by using the same data augmentation strategy as TUTOR. The results of STTran and TUTOR on VidHOI are shown as below.\n\n| | Full | Rare | NoneRare | S | T |\n|:------:|:-----:|:-----:|:--------:|:-----:|:-----:|\n| Strran | 20.38 | 18.64 | 29.89 | 27.04 | 12.95 |\n| TUTOR | 26.92 | 23.49 | 37.12 | 32.21 | 21.28 |\n\nThe results show that the method for DSGG can be applied for V-HOI detection but achieves inferior performance. In table 4, we thoroughly compare the different methods, including the methods that specifically proposed for V-HOI detection [40, 35, 42, 4], recent image-based methods[19, 37] and their variants (using Slowfast as backbone to extract temporal information), and popular Transformer-based video analysis method [1].\n\nOverall, thank you for your valuable comments again, we hope our response can address your concerns.\n\n---\n\n[A2] Yuren+. Spatial-Temporal Transformer for Dynamic Scene Graph Generation. In ICCV, 2021.", " Thank you so much for your response and appreciation.\n\n---\n\n**Q1**\n\nThank you for you appreciation. Yes, the main difference with deformable DETR lies in the token agglomeration part. The detailed discussion will be added in the Appendix, including the difference between object detection and HOI detection, as well as the difference between TUTOR and deformable DETR.\n\n---\n\n**Q2**\n\nNo, there is no error here. As the interaction categories in VidHOI are divided into two different types: static interactions and time-related interactions. Therefore, in Table 3c, we detailed reported the ablation results of global context refining (GCR) on two subsets of VidHOI for detecting static interactions (S) and time-related interactions (T), respectively. The ablation result in our response (i.e., 26.29 w/o GCR vs. 26.92 w/ GCR) is the overall result of GCR on the whole VidHOI.\n\n---\n\n**Q3**\n\nThank you for your valuable comments. We understand your concern. To make the results substantial enough, we further reported the results of 2-layer and 4-layer Transformer decoders (the results of 6-layer are reported in table 4). The results are shown below.\n\n| Decoder | | S | | T |\n|:-------:|:-----:|:---------:|:-----:|:-------:|\n| | Patch | Instance | Patch | Tubelet |\n| MLP | --- | --- | 2.30 | 8.28 |\n| 1-layer | 9.67 | 16.42 | --- | --- |\n| 2-layer | 12.35 | 18.27 | 6.24 | 14.85 |\n| 4-layer | 19.40 | 25.96 | 10.32 | 18.67 |\n| 6-layer | 27.12 | 32.21 | 13.29 | 21.28 |\n\nHere, the patch tokens are generated by HOTR [19] with SlowFast as backbone. 
Two conclusions can be drew from the results: 1) The decoder is important as a stronger decoder can achieve better results, both for patch and instance/tubelet tokens; 2) The performance gain between instance/tubelet tokens and patch tokens becomes more evident as the decoder becomes simpler, which verifies the effectiveness of token abstraction and token linking in capturing spatial and temporal information. These results will be added in the Appendix.\n\n---\n\n**Q4**\n\nAs a simple validation, we count all frames used for drawing Figure 3c and calculated the frequency of the co-occurrence by using the ground-truth labels. We reported the percentage of frames where two interaction co-occur over all frames.\n\n| | watch | hold | speak-to | ride | feed | push | pull | lean-on | lift | wave-hand |\n|:---------:|:-----:|:----:|:--------:|:----:|:----:|:----:|:----:|:-------:|:----:|:---------:|\n| watch | 76 | 27 | 59 | 4 | **68** | 24 | 17 | 7 | 12 | **72** |\n| hold | 27 | 72 | 0 | 6 | 0 | 52 | 14 | 0 | 69 | 0 |\n| speak-to | 59 | 0 | 68 | 0 | 17 | 0 | 0 | 0 | 0 | 42 |\n| ride | 4 | 6 | 0 | 81 | 0 | 0 | 0 | 0 | 0 | 0 |\n| feed | 68 | 0 | 17 | 0 | 71 | 0 | 0 | 0 | 0 | 2 |\n| push | 24 | 52 | 0 | 0 | 0 | 63 | 0 | 0 | 31 | 0 |\n| pull | 17 | 14 | 0 | 0 | 0 | 0 | 61 | 0 | 28 | 0 |\n| lean-on | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 64 | 0 | 0 |\n| lift | 12 | 69 | 0 | 0 | 0 | 31 | 28 | 0 | 61 | 0 |\n| wave-hand | 72 | 0 | 42 | 0 | 0 | 2 | 0 | 0 | 0 | 79 |\n\n As the results shown, the attention scores in Figure 3c and the frequency of the co-occurrence are mostly consistent.\n\n---\n\n**Revision**\n\nWe highlighted the revised parts in the revision and all of aforementioned discussion were added in appendix.\n\nThank you again for your detailed comments. Your valuable comments help us to make our paper stronger.", " The response addressed some of my concerns such as token linking. But the remaining issues such as performance comparison remain unresolved. Overall, I stick to my rating of borderline accept.", " I appreciate the authors' detailed feedback.\nIt indeed helps me better understand the contribution of the paper.\n\n---\n**Q1**\n> Object detection demands location-sensitive features to precisely regress the bounding boxes while the final goal of HOI detection is to classify the interaction categories, which requires location-invariant features within the union region of a pair of interacted human and object.\n\nI think this is a very good argument that helps readers better understand the different characteristics of object detection and HOI detection as well as the rationality of the design of the proposed method. I recommend adding this kind of description in the manuscript.\n\nIn addition, I understood the main difference with deformable DETR lies in token agglomeration part. If that is the case, I think it is better to clarify it in the manuscript.\n\n---\n**Q2**\n\nThank you but I could not find the following in Table 3c.\n> (26.29 w/o GCR vs. 
26.92 w/ GCR)\n\nIs there any error?\n\n---\n**Q3**\n\nI understood the authors' intention, but I still have a concern on the experimental design.\n> Therefore, we tried to use a very simple decoder, since an over-powerful decoder would make it hard to verify whether the performance gain comes from better feature extraction or powerful decoding capabilities.\n\nI think this implies the effectiveness of the proposed components are not substantial enough that the effect diminishes if they are used with powerful decoder.\nSince the proposed method relies on the powerful decoder of 6-layers as mentioned in Section 3.5, I think the effectiveness of each component should be discussed in this setting.\n\n---\n**Q4**\n\nI am still not sure if the co-occurrence of \"watch\" and \"wave hand\" is reasonable or not. Currently the rationality of the similarity matrix is claimed only on the basis of the subjective experience, i.e., \n> they are likely to co-occur according to our experience\n\nI wonder if it is possible to show more objective evidence such as the frequency of the co-occurrence calculated by the ground-truth labels.\n\n---\n**Q5-11**\n\nThank you for the feedback, I understood them.\nI think the explanation in the response to Q10 and Q11 contain good insights to the readers.\nTherefore I recommend adding these descriptions in the manuscript or supplementary material.\n\n---\n**Overall**\n\nSince many of my concerns are addressed, I increased my rating accordingly.\nIf the authors will provide further revision, I would appreciate if the revised parts are highlighted, which helps the reviewers easily find out the revised parts.", " **Q4: Question about Figure 3c, and why is \"watch'' and \"feed'' picked up, but not the \"watch'' and \"wave hand''?**\n\nIn **G2**, we described the detailed process of generating the similarity matrix and explained the conclusion we draw from Figure 3c. To sum up, Transformer can mine the priors of these co-occurring interactions, i.e., interactions that are more likely to co-occur have higher attention score, which can improve the performance of HOI detection since it is a multi-label classification task.\n\nWe just took \"watch'' and \"feed'' as the examples, since 1) they are likely to co-occur according to our experience and 2) the attention score between them is high. In a similar manner, we can also choose the \"watch\" and \"wave hand\" as the examples. The consistency between co-occurring and high attention scores evidences TUTOR can mine the priors of these co-occurring interactions.\n\nIn addition to the above conclusion, there is another purpose for showing Figure 3c. Cross-attention has attracted lots of interests and has been widely explored while self-attention in decoder was under-explored in Transformer-based HOI detection methods. We verified that self-attention in the decoder can capture the priors of co-occurring interactions and these priors are important for HOI detection. Therefore, our experiment may help the community to explore more potential of self-attention in the decoder, e.g., some well-designed HOI queries (randomly initialized for now) that allow self-attention to learn more priors.\n\n---\n\n**Q5: What is the concrete process of \"ViT-like\" variant?**\n\nViT-like is illustrated as Figure 1b, i.e., splitting an image into a fixed number of patches. On this basis, an additional token fusion layer is added at the end of each ViT layer in the ViT-like variant. 
The fusion layer fuses every 4 neighboring patch tokens by taking their average value as the output (Line 262-263), which is similar to the average pooling in CNN and can reduce the spatial redundancy to some extent. \n\n---\n\n**Q6: Does Gumbel-softmax variant ensure one-to-one assignment?**\n\nNo, it does not ensure one-to-one assignment. As explained in Line 275-278, when directly using the value of Gumbel-softmax as assignment weights (replacing the $\\hat{\\mathbf{A}}$ in Eq.11 with $\\mathbf{A}$), all key tokens in a frame are weighted added into a query token.\n\n---\n\n**Q7: What does \"all integral locations\" in Line 113-114 mean?**\n\nEquation 3 follows the standard process of bilinear interpolation. Specifically, the predicted offsets are usually fractional, which leads to a fractional sampling location that can not be directly sampled from regular grids on feature maps. In this case, its value is calculated by the weighted sum of its neighbouring 4 pixels with integral locations. For example, if the target location is [3.5, 4.1], the locations of neighbouring pixels would be [3,4] (top-left), [4,4] (top-right), [3,5] (bottom-left) and [4,5] (bottom-right).\n\n---\n\n**Q8: typos and confusing details**\n\nThank you. We have carefully revised our manuscript and re-uploaded a revision. Specifically, we rewrite equation 1 and use $[w_x^i, w_y^i]$ to denote the location of the top-left point of the $i$-th window. If we count windows row-by-row, $w_x^i = (i \\cdot S_w) $% $W$ and $w_y^i = \\lfloor\\frac{i \\cdot S_w}{W} \\rfloor \\cdot S_w$, where % is remainder operation. We replace $N$ in Line 144 with $J$, which indicates the number of instance tokens in each frame after token abstraction, and $j$ is the index of instance token. $q$ in Line 150 is revised as $r$ and $\\text{global}^{\\dagger}$ in the caption of table 1 is revised as $\\text{ViT-like}^{\\dagger}$. We have carefully double-checked all equations to avoid confusion. \n\nWe did use ``vspace\" to adjust the spacing between some paragraphs in our reviewed version, but the margin did not changed. In revision, we have removed them all. Besides, Figure 3 and Table 3c have been enlarged for clarity in the revision. In checklist 4b, VidHOI and CAD-120 are both public databases, but we did not find specially mentioned license. This is why we answered this question as N/A. We have revised it as No in revision.\n\n---\n\n**Q9: How the accuracies of other methods in Table 4 are obtained?**\n\nThe results of ST-HOI [4] are quoted from [4]. For other methods, we use their official codes to re-trained their models on the two datasets, vidHOI and CAD-120, except for the results of [35, 42] on CAD-120, which are quoted from [35, 42]. The training strategies and data augmentation are kept as the same as TUTOR.", " **Q10: Using Hungarian algorithm to find one-to-one matching in token linking.**\n\nYes, Hungarian algorithm can be used to find one-to-one matching, which has been tried in our early experiments. It achieved a slightly lower result than nms-one-hot (26.04 vs. 26.92 on Full set in VidHOI). We conjecture that the reason is as follow. For the best case, i.e., not more than one key token in each frame is assigned to the same query token, the results of nms-one-hot and the Hungarian algorithm are the same. For other cases, nms-one-hot prioritizes matching key token with higher similarity to the corresponding query token while Hungarian algorithm aims at maximizing the sum of global similarities after matching. 
However, in a video clip, humans and some salient objects are the protagonists of the scene while background objects are not of our interests. For example, in Figure 1, the basketball player 1 (in blue short), the player 2 (in white short) and the basketball are the protagonists while the background people have little effects to detect the HOIs (hold/play/shoot basketball). These protagonists have more distinguishable features, so they are more likely to have a higher similarity across different frames. Thus, nms-one-hot can correctly and preferentially match these protagonists, but the Hungarian algorithm may fail to do so, and may even match the player 1 to some background humans in different frames to get the global maximum. Overall, nms-one-hot explicitly matches the protagonists first, while the Hungarian algorithm treats all objects equally.\n\n---\n\n**Q11: One-to-one matching may not be the optimal solution when an object disappear in some frames.**\n\nAs aforementioned in **G1**, we have used two different strategies to alleviate the errors that introduced by one-to-one matching. However, the optimal solution should rely on a thorough and completely correct understanding of each frame, which we have to admit is a quite difficult task. We will keep working on this issue in the future.\n", " Thanks for your valuable comments and your appreciation for our technical contributions. Below we discuss the concerns you have raised in detail.\n\n**Q1: The relation to deformable DETR.**\n\nDeformable DETR [52] indeed inspired TUTOR and we give more credit to [52] and [A1] in the revised version. However, TUTOR and deformable DETR are designed for different tasks with different objectives, which leads to a significant difference between TUTOR and deformable DETR. Specifically, deformable DETR maintains a fixed number of tokens to capture fine-grained features. Although it calculates attention locally, there is no explicit module to agglomerate a group of related tokens. In comparison, TUTOR involves a clustering process, where related tokens are grouped into an irregular window and then updated and agglomerated. These two different designs are proposed to meet different task requirements. Object detection demands location-sensitive features to precisely regress the bounding boxes while the final goal of HOI detection is to classify the interaction categories, which requires location-invariant features within the union region of a pair of interacted human and object. Consequently, deformable DETR is suitable for object detection but might not for HOI detection. Without an explicit clustering mechanism, deformable DETR cannot capture instance-level tokens, which prevents high-level understanding of interactions.\n\nIn our early experiments, we have tried to replace the token abstraction module in TUTOR with deformable DETR, but the performance is relatively poor (more than 10% degradation compared to TUTOR). Overall, TUTOR is more suitable for high-level scene understanding tasks, such as HOI, with the ability of extracting instance tokens, while deformable DETR shows a better ability in extracting fine-grained features. \n\nIt is worth mentioning that few Transformer-based works discussed the problem of dynamically clustering, especially in the HOI community. 
Therefore, we hope that TUTOR can introduce some new ideas to the community.\n\n---\n\n**Q2: Ablation study on global context refining.**\n\nGlobal context refining (GCR) achieves a performance gain of 0.63 on the Full set of VidHOI (26.29 w/o GCR vs. 26.92 w/ GCR). The detailed results are added in Table 3c in the revision.\n\n---\n\n**Q3: The reason why different decoders are used in Section 4.5.**\n\nAs explained in line 289-291, we conducted this experiment to verify the effectiveness of instance tokens (token abstraction) and tubelet tokens (token linking) in capturing spatial and spatiotemporal information, respectively. Therefore, we tried to use a very simple decoder, since an over-powerful decoder would make it hard to verify whether the performance gain comes from better feature extraction or powerful decoding capabilities. \n\nMore specifically, since 1-layer is the simplest architecture of a decoder, we used it to test the performance of instance tokens on static HOI detection. Besides, for time-related HOI detection, we further removed the Transformer decoder and directly used a global average pooling layer followed by a 4-layer MLP, i.e., 4 fully-connected layers (not 4-layer decoder that consists of self- and cross-attention), to detect the time-related interactions. As shown in Table 2a, as the decoder is fairly simple, the performance gain between instance/tubelet tokens and patch tokens become more evident, which verifies the effectiveness of token abstraction and token linking in capturing spatial and temporal information. \n", " Thanks for your valuable comments and your appreciation for our technical contributions. Below we discuss the concerns you have raised in detail.\n\n---\n\n**Q1: Problem about missing instances in some frames.**\n\nWe answer this question in **G1**. As aforementioned, TUTOR uses two simple but effective strategies to address this problem, including splitting a long video into several short ones, and summing the matched tokens by using similarities as weights, instead of concatenating them directly. But we agree that it is still not the optimal solution and it deserves more exploration to completely eliminate the redundancy, especially on the temporal domain.\n\n----\n\n**Q2: [17] should also be compared in Table 5 and 6.**\n\nWe show our respect to [17] and have briefly introduced it in related work (Line 63-66). However, its source code is not available. Our reproduced code does not guarantee a fair comparison, especially the speed of inference.\n\n----\n\n**Q3: Revise the manuscript.**\n\nThank you. We have carefully double-checked our manuscript and revised it. The revision is re-uploaded.\n\n\n", " **Q3: Global context refining may not discriminate against co-occurring interactions.**\n\nGlobal context refining is **NOT** used for discriminating against co-occurring interactions, but for detecting them simultaneously. Note that, HOI detection is a multi-label classification task, i.e., a human/object may have multiple interaction labels. Taking the example of \"holding-fork'' and \"eating-cake'', although they describe almost the same action pattern, \"holding-fork'' is prone to be detected but \"eating-cake'' might be missed, if the information of \"cake'' is not captured. In this case, global context refining can exchange the information between \"hold-fork'' and \"cake'', which leads to the label of ``eating''. Namely, with global attention refining, the detection of one interaction instances can help to detect another one. 
In previous literature, global context refining was shown to be helpful for detecting co-occurring interactions[20, 37].\n\n[20] Kim+. Hotr: End-to-end human-object interaction detection with transformers. In CVPR, 2021.\n\n[37] Tamura+. Qpic: Query-based pairwise human-object interaction detection with image-wide contextual information. In CVPR, 2021.\n\n---\n\n**Q4: The reference methods ST-HOI and STIGPN should be briefly introduced**\n\nWe have briefly introduced ST-HOI [4] in Line 58-61 and STIGPN [42] in line 55-56 in the related work and analyzed their differences with TUTOR. The comparison to these two reference methods were reported in Table 4.\n\n---\n\n**Q5: More detailed description to the experiment \"spatial distance''**\n\nAs the datasets we used provide human/object annotations on each frame, we can compute the spatial distance, i.e., the Euclidean distance, between the center of each human and the center of the interacted object frame-by-frame.\n\n**Q6: How to generate the similarity matrix in Figure 3c? Why \"watch'' and \"feed'' are likely to co-occur?**\n\nWe describe the detailed process of generating the similarity matrix in **G2**. We think it is easy to understand \"watch'' and \"feed'' are likely co-occur. Humans are normally watching the animal/baby they are feeding. As a simple validation, we count 8,000 frames with label of \"feed'', and find more than 60% of them also has label of \"watch''.\n", " Thanks for your valuable comments and your appreciation for our technical contributions. Below we discuss the concerns you have raised in detail.\n\n**Q1: Validation of that the tubelet tokens are aligned with semantic instances and that the IWP can extract instance-level tokens.**\n\nWe have given some results for this validation in the main body of our manuscript and the supplementary material.\n\nIn the supplementary material, we visualized the sampling locations of instance tokens and tubelet tokens in Figure 2 and Figure 3, respectively. Qualitatively, the envelope lines of the sampling locations are roughly aligned with semantic instances. In the main body of our manuscript, we compared HOI detection results by performing a simple decoder on instance/tubelet tokens and tokens obtained by regular window partition, respectively, in Table 2a. The large improvement brought by instance/tubelet tokens quantitatively implies that instance/tubelet tokens are roughly aligned with semantic instances. Besides, during the rebuttal period, we conduct an extra experiment to explicitly measure how precise the instance/tubelet tokens are aligned with semantic instances. We measure this by computing the IoU between each instance token and its corresponding ground truth human/object bounding box. For comparison, we also compute this IoU metric for patch tokens. To this end, we first match each patch token to a human/object instance by performing classification on patch tokens. Then we compute the IoU between each group of patch tokens matched to the same human/object instance and the ground truth human/object bounding box. We report the mean IoU over the entire testing set, where the instance tokens and the patch tokens achieve 65.30\\% and 17.80\\%, respectively. 
This comparison result shows the strong ability of instance/tubelet tokens to align with semantic instances.\n\n\nAlthough IWP cannot guarantee that every generated token is exactly aligned with an instance, the above qualitative and quantitative results show that IWP has a great ability to extract instance-level tokens, compared with patch tokens and tokens obtained by regular window partition. This ability comes from two key designs: 1) IWP is performed on progressively downsampled feature maps, which give a sufficiently large receptive field; 2) The stacked convolution and attentions layers have provided enough contextual information (rich features) for IWP to learn the right offsets to align with instances. \n\n---\n\n**Q2: Does nms-one-hot introduce biases when the order of query tokens changes?**\n\nThe order of query tokens does not influence the nms-one-hot operation. Since nms-one-hot operation is performed on the similarity matrix $\\mathbf{A}$ (Eq. 9), we would like to give a brief description of $\\mathbf{A}$ first.\n\n$\\mathbf{A}$ is a three-dimensional matrix where the first dimension is index of frame, the second and third dimensions are index of query and key tokens, respectively. Taking $t$-th frame as example, since the second dimension is the index of query token, changing the order of the query tokens is actually swapping the rows in $\\mathbf{A}_t$. However, the nms-one-hot operation is looking for the maximum value along the column. Therefore, the whole process is not affected by the order of query tokens.\n\nWe further verify this conclusion through an experiment. Concretely, we manually and randomly change the order of the query tokens for 3 times, and we observed no difference in the result.\n", " We thank all reviews for the constructive comments. We upload a revision with a few modifications, detailed as follows:\n- Double-check the manuscript and revise some typos.\n- Redefine some symbols to make them more clear.\n- Add an ablation study about global context refining.\n- Increase the spacing between paragraphs and enlarge Figure 2 and Table 3c to make them more clear.\n- Move the detailed process of calculating loss function into supplementary material.\n\nTo avoid confusion, the line numbers, equation numbers, and reference numbers used below are consistent with the reviewed version, not the re-upload revision. \n\nWe hope our response addresses your concerns and we would be glad to discuss any further questions.\n", " We thank all the reviewers for their valuable comments and insightful advice. Below we address several common concerns.\n\n**G1: An instance does not always appear in all frames, how does this affect the token linking? [R2, R3]**\n\nIn practical scenarios, it is true that an instance does not always appear in all frames, especially when the video is long. To investigate the influence of this issue, we conducted the experiment in Table 1(c) and analyzed the results in Line 281-287. Specifically, when a video is short, the assumption that an instance appears in all frames does not introduce much noise, but this might not be true for long videos. As reported in Line 283, we found that when a video is longer than 16 seconds, the performance of nms-one-hot would be significantly degraded. We addressed this problem by splitting a long video into short clips with the same length and performing nms-one-hot assignment in each short clip. This strategy can alleviate this problem to some extent. 
Besides, in Equation 11, instead of directly concatenating the matched tokens, we summed them using the similarity $\\mathbf{A}$ as the weights. In this way, if a counterfeit key token matches a query token, its similarity would be relatively small, which can reduce the effect of noise. \n\nIt is our belief that reducing redundancy in token representation is crucial for vision Transformers. Compared with patch tokens, the proposal of tubelet tokens has made considerable progress, especially for eliminating redundancy in the spatial domain. To further eliminate the redundancy in the temporal domain, more fine-gained perception of instances in each frame is required. A straightforward idea is to perform an additional object detector on each frame, and if an instance is not detected, we can use a [mask] token to indicate it. However, it will make the entire model much more complex and introduce additional computational costs. Moreover, the object detector may also have some errors. In our future work, we will keep exploring how to more effectively eliminate the redundancy of the tubelet tokens in the temporal domain.\n\n---\n\n**G2: A more detailed explanation for Figure 3c. [R1, R3]**\n\nTo systematically answer this question, in the following three paragraphs, we elaborate **what Figure 3c illustrates**, **how to produce Figure 3c**, and **what conclusion we can draw from Figure 3c**, respectively.\n\nFigure 3c illustrates the attention weight map computed by the self-attention module in the last decoder layer of TUTOR. It shows that Transformer is able to model the pair-wise relations between different HOI instances, i.e., the priors of co-occurring interactions. Specifically, each decoder layer consists of a self-attention module and a cross-attention module, where the cross-attention module globally reasons about all HOI instances by using the features computed by the encoder of TUTOR as the context. In comparison, self-attention calculates pairwise attention weights between different HOI instances, which does not involve the visual features from the encoder of TUTOR.\n\nTo produce Figure 3c and make the results convincing, we picked 5 static interactions and 5 dynamic interactions, each of which has enough samples (over 10,000 frames). Then, for each input sample, we generated an attention map $ \\mathbf{A}\\_{\\text{self}}$ with size of $N_q \\times N_q$ by the self-attention module in the last decoder layer of TUTOR, as mentioned above, where $N_q=100$ is the number of HOI queries we manually defined. Then, if two different interactions (e.g., the $i$-th and $j$-th interaction) are simultaneously predicted for an input sample, we recorded the attention weight $\\mathbf{A}_{\\text{self}}^{(i,j)}$. Finally, the scores shown in Figure 3c are the average attention weights over all samples. \n\nAs HOI detection is a multi-label classification task, i.e., a human/object may have multiple interaction labels, it was found that the performance of HOI detection can be improved by mining the priors of these co-occurring interactions [8,15,20]. Figure 3c evidences that Transformer can mine such priors, i.e., interactions that are more likely to co-occur have higher attention weights. 
This shows the superiority of Transformer in HOI detection.", " This paper proposes a video-based HOI detection method that formulates the spatio-temporal representation in a video clip as tubelet tokens, in which the tubelet tokens are extracted based on the window-based multi-head self-attention with learnable offsets to aggregate semantically-related patch tokens in the spatial domain, and then temporal token linking to aggregate one-to-one matched tokens from the other frames. The tubelet tokens are then refined with global contexts and used as the video embeddings for querying HOI instances by a well-known transformer decoder, similarly to QPIC. \n\nThe tubelet tokens are better than patch tokens due to their compactness in representing spatio-temporal patterns and are more likely to align with visual dynamics from semantic instances. These two benefits are important in generating informative video embeddings for transformer-based HOI detection. Therefore, extensive experiments show that a video-based HOI detection model with tubelet tokens becomes more effective and efficient than previous patch token-based reference methods. --- Strengths ---\n\n1. Video embeddings as tokens that represent spatio-temporal tubelets are beneficial for various action-related tasks, such as HOI detection. This paper proposes a nice and practical attempt toward this goal and is a good reference for readers in related research fields.\n2. This paper is generally well-written with clear organization and comprehensive experiments.\n3. The effectiveness and efficiency of the proposed method have been proven, with significant performance gains and runtime speedup to existing methods. \n\n--- Weaknesses ---\n\n1. This paper claims that the tubelet tokens are aligned with semantic instances, but the experiments did not validate this contribution. In fact, the IWP operation is not guaranteed to extract instance-level tokens.\n\n2. The technical details should be clarified:\n- Token linking: The nms-one-hot operation seems to introduce biases since different orders of query tokens will result in the different linkage of key tokens. I am not sure whether such biases affect the prediction or not.\n- Global context refining: Knowing the global contexts should be useful since contexts may help discriminate fine-grained interactions. But enhancing global contexts may not discriminate against co-occurring interactions because these interactions may be different labels describing the same spatio-temporal action patterns.\n\n3. Experiments: \n- The analysis of CNN-based & Transformer-based methods: \n - The reference methods ST-HOI and STGPN should be briefly introduced and compared with the proposed method. \n - How the experiment ``spatial distance'' is conducted is not quite sure. For example, does the spatial distance means the distance between the human and the object? How to handle the case that the spatial distance is temporally varying?\n - How to generate the similarity matrix in Fig.3 (c)? Line 252-256 briefly introduces the procedure, but it is still not quite straightforward. Moreover, why 'watch' and 'feed' are likely to co-occur?\n Overall, this is a good paper with a sound novelty. I am almost satisfied with this manuscript, but it is encouraging if the authors can answer the questions listed in the paper's weaknesses. 
Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work.", " This paper presents a novel vision Transformer TUTOR for human-object interaction detection in videos. TUTOR structurizes a video into a few tubelet tokens by agglomerating and linking semantically-related patch tokens along spatial and temporal domains. Experiments are conducted on VidHOI and CAD-120. Strengths:\n1. Ablation studies are conducted to evaluate the effectiveness of each component.\n2. The proposed method achieves a new state-of-the-art performance.\n\nWeaknesses:\n1. In the paper, token linking aims at generating complete tubelets whose length is equal to the duration of the video. However, in real scenarios, the instances don’t necessarily appear in the entire video and the tubelets are thus broken into fragments. In these cases, generating a full-length tubelet is unreliable and may introduce much noise.\n2. Some closely related works, such as [17], should also be compared in Table 5 and 6.\n3. The manuscript is not well written and needs to be revised carefully.\n See above See above", " This paper proposes a new method for obtaining efficient token representations in transformer-based HOI detection in videos. The key idea is to spatially aggregate feature tokens using irregular window partition in order to deal with various shapes of objects (token abstraction) and to temporally link the tokens based on the similarity (token linking). The proposed method is evaluated on benchmark datasets and achieved favorable performance. ## Strength\n1. The proposed framework is overall reasonable and sound. Especially the design of token linking module is new. I think the proposed method is a reasonable extension of transformer-based HOI detection in videos.\n1. The proposed method achieved favorable results on standard benchmark datasets. \n1. The effectiveness of the proposed components are verified at least to some extent.\n\n## Weakness\n1. The proposed method is not discussed well under the context of very similar work that should have inspired the proposed method. IWP in token abstraction module is very similar to deformable DETR [52]. The appropriate credit should be given to [52] and maybe also to [A1], and the proposed method should be discussed in relation to these works. In my view, token abstraction module is not a innovation of the present paper, rather, it is adaptation of the existing concept.\n\n1. The experiment is not very satisfactory. \n 1. There is no ablation study on global context refining.\n 1. Section 4.5: The reason why different decoders (1-layer, 4-layer) are used is not explained. How is the performance differences if the normal 6-layer decoder is used?\n 1. Figure 3c: I find the explanation of this figure is not convincing enough. In L256 the example of watch and feed is picked up, but watch is more strongly related to wave hand, which I think is strange. Overall, I find difficulty in drawing any solid conclusion from this similarity matrix.\n\n1. Some of the important details are not clear and confusing. \n 1. In table 1a, what is the concrete process of “ViT-like” variant?\n 1. In table 1b, does the “Gumbel-softmax” variant ensure one-to-one assignment? \n 1. In equation 1, “[(i-1)S_w, (i-1)S_w]”, does this mean the regular windows are placed only diagonally? Isn’t it something like “[(i-1)S_w, (j-1)S_w]”? \n 1. In L102, “i-th regular window” is this “i” different from “i” in L103 ([i,j], C_i)? If so, this is very confusing.\n 1. 
L113-114, “q enumerates all integral locations”. What does “all integral locations” mean? How are they calculated?\n 1. L144: Is “N” here equal to “N” in L109?\n 1. L145: I recommend not using index “i” for here and there to avoid confusion.\n 1. L150: Is “q” here mistake for “r”?\n 1. The caption of Table 1: What is “global$^{\\dagger}$”?\n 1. How the accuracies of other methods in Table 4 are obtained?\n\n1. I’m afraid the paper, not significantly but slightly, violates the formatting instruction of NeurIPS. The margin seems too small and the characters in figures are invisible without zooming in (especially Fig. 3 and Tab.1c). I’m afraid this is not fair. \n\n1. Typos and editorial errors.\n 1. L163 “the the”. \n 1. L284 “frame. when…”. \n 1. L292 “respectively. \n 1. the mAP…”. \n 1. Table 4: wrong citation for QPIC. \n 1. [52] ICLR 2021, not 2020. \n 1. Checklist 4b: Since existing assets are used (according to the answer to 4a), the answer to this question should be either yes or no. Actually it should be “No” because I found no license statement.\n\n[A1] Dai+, Deformable Convolutional Networks, ICCV 2017\n 1. I think one reasonable way to find one-to-one matching in token linking module is to use Hungarian algorithm. Is it possible to use it, and if so, how does it compare to the proposed scheme for linking tokes?\n1. In the token linking module, it is implicitly assumed that a token in a frame always match to another token in another frame. However, I think this is not always the case because an object may be occluded after some frames or an object may appear at some frame. In this view, I do not think it is optimal to try to find one-to-one matching between frames. I would like to know the authors’ opinion on this point.\n The limitations and potential negative impact are discussed in Section 5." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "1wvoKxfwE8v", "xz2OjMH20nV", "c59arUS9HU9", "tKUEKSCtve", "DcCN3VF-u-a", "DcCN3VF-u-a", "reFFUNfxdH1", "dXghCT-xeZk", "Dl2wXbbcaRJ", "OTDSwLaGAkU", "nips_2022_kADW_LsENM", "nips_2022_kADW_LsENM", "nips_2022_kADW_LsENM", "nips_2022_kADW_LsENM", "nips_2022_kADW_LsENM" ]
nips_2022_r70ZpWKiCW
Semi-Supervised Semantic Segmentation via Gentle Teaching Assistant
Semi-Supervised Semantic Segmentation aims at training the segmentation model with limited labeled data and a large amount of unlabeled data. To effectively leverage the unlabeled data, pseudo labeling, along with the teacher-student framework, is widely adopted in semi-supervised semantic segmentation. Though proven effective, this paradigm suffers from incorrect pseudo labels, which inevitably exist and are taken as auxiliary training data. To alleviate the negative impact of incorrect pseudo labels, we delve into the current Semi-Supervised Semantic Segmentation frameworks. We argue that the unlabeled data with pseudo labels can facilitate the learning of representative features in the feature extractor, but it is unreliable to supervise the mask predictor. Motivated by this consideration, we propose a novel framework, Gentle Teaching Assistant (GTA-Seg), to disentangle the effects of pseudo labels on the feature extractor and the mask predictor of the student model. Specifically, in addition to the original teacher-student framework, our method introduces a teaching assistant network which directly learns from pseudo labels generated by the teacher network. The gentle teaching assistant (GTA) is called gentle since it only transfers the beneficial feature representation knowledge in the feature extractor to the student model in an Exponential Moving Average (EMA) manner, protecting the student model from the negative influences caused by unreliable pseudo labels in the mask predictor. The student model is also supervised by reliable labeled data to train an accurate mask predictor, further facilitating feature representation. Extensive experimental results on benchmark datasets validate that our method shows competitive performance against previous methods. We promise to release our code for reproducibility.
Accept
The paper was reviewed by four expert reviewers in the field. The initial ratings were three weak accepts and one weak reject. In the response to reviewer sFdH (who gave the weak reject), the authors clarify all of the reviewer's questions, including the use of labeled data in GTA, the data split, training details, and the advantages of the re-weighting mechanism. Although reviewer sFdH did not acknowledge the rebuttal, the AC believes that these questions have been sufficiently addressed. Given the novel approach, extensive quantitative evaluation, and clear writing, the AC agrees with the three positive reviewers and recommends acceptance.
val
[ "nkwXMg_K4ZB", "qb2ayODV49n", "jHSzRXzr59r", "XYA7QGmDJnm", "FUsAWIYIUv", "3YMdH6ezk_x", "ZfPY2YnGMzi", "IZd1J17Wc03", "1bc8XjJNFiZ", "qAMqtd9H4l" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all our reviewers for your distinguished efforts and insightful comments. We have answered your questions in our responses, hoping that we can address your concerns. \n\n\nWe have also uploaded the revised paper and supplementary materials (the modifications are present in blue color). Here, we summarize our main modifications. \n\n\n__1. Method design analysis:__ We add the analysis of our model design in Sec 4.4 in our revised paper, where we compare other different method designs with our proposed one and demonstrate that our method shows the strongest performance. We also try to modify Sec 3 to make it more concise and clear.\n\n__2. Clarification of experiment settings:__ We have modified notations in Table 1-3 to avoid them from being confusing. \n\n__3. Limitation analysis:__ The analysis of limitations in our original paper may cause some misunderstandings. We have analyzed the limitation from other perspectives. \n\n__4. Clarification of implementation details:__ We have added more implementation details of our method and experiment in our revised Appendix A.1, including hyper-parameters and random seeds. We also conduct a hyper-parameter sensitivity analysis in Appendix A.2.3. \n\n__5. More visualization results:__ We present more visualization results in revised Appendix A.3.\n\n__6. Typos and missed related works:__ We have fixed the typos and added the works our reviewers point out. \n\n\nFinally, sincere gratitude to our reviewers. We are looking forward to having further discussions with you. \n", " Sincere gratitude to you for the recognition of our idea and detailed suggestions. We hope our answers can address all of your concerns. We are looking forward to having further discussions with you. \n\n__Q1: What is the \\tau value in Eq. (6)?__\t\n\n\nThe hyper-parameter $\\tau$ is 1.0 in all of the experiments.\n\n\n__Q2: How do the authors select the threshold for pseudo-labeling, i.e., is \\gamma a constant or varied and how?__\n\n\nWe select the top 80% of the data, in other words, we abandon the 20% data with lower confidence than others. $\\gamma$ is decided by this criterion so it varies during the training procedure. \n\n\n  __Q3: Table 4-7 are conducted on a smaller labeled set (183 images). However, I wonder how Table 4-7 would look like when there are more labeled data, e.g., >1000/2000 labeled images ...__\n\n\nWe have conducted the ablation experiments with 1464 labeled images. The results are shown below. \n\nThe ablation study on the components of our method.\n\n| Teacher-Student | Gentle Teaching Assistant | Re-weighted | mIoU |\n|:---:|:---:|:---:|---|\n| &#10004; | x | x | 70.96 |\n| &#10004; | &#10004; | x | 78.95 |\n| &#10004; | &#10004; | &#10004; | __80.47__ |\n\nComparison of knowledge transmission mechanisms.\n\n| Method | mIoU |\n|-----|----|\n| SupOnly | 72.50 | \n| Original EMA | 74.10 |\n| Unbiased ST | 74.16 |\n| Ours | __78.95__ | \n\nResults of three models.\n\n| Method | mIoU |\n|-----|----|\n| Gentle Teaching Assistant | 76.89 |\n| Student Model | 78.52 |\n| Teacher Model | __78.95__ | \n\nAblation study of our re-weighting strategy.\n\n| Confidence-based Re-weighting | Laplace Smoothing| mIoU |\n|:---:|:---:|---|\n| x | x | 78.95| \n| &#10004; | x | 79.74|\n| &#10004; | &#10004; | __80.47__ |\n\n\n __Q4: In Table 6, the teacher model performs better than the student model. It's interesting to know whether it is the same case for all experimental settings. 
If so, what is the intuition behind it?__ \n\n\nActually, it is the same case in all of our experiments, the teacher model shows slightly stronger performance than the student model. We conjecture that with EMA parameter updates, the teacher model can be optimized more smoothly than the student model and can have a better performance. Fortunately, the stronger teacher model, which generates pseudo labels, can push the performance of our GTA and student model to a higher level by the fly-wheel effect. \n\n\n__Q5: Missing references.__\n\n\nWe have added these related works in our revision. \n\n\n__Q6: Questions about limitation.__\n\nThank you for pointing out this. We express sincere apology that our statement is somewhat confusing. We have analyzed the limitation of our method from other perspectives in our revision. \n", " __Q4: The authors suggest that training the student with faulty pseudo labels would hinder results. How about training GTA supervised and the Student semi-supervised ...__\n\n\nThank you for providing this important comparison experiment. The results are shown below. We can observe that training GTA supervised and student semi-supervised shows significantly lower performance than our design. It is consistent with our intuition that the student model is not suitable for learning from the noisy pseudo labels directly. We have added these discussions to our revised main paper (Sec 4.4, Table 7). \n\n\n| GTA | Student | mIoU | \n| :---- | :---- | :----: |\n| Labeled Data | Pseudo Labels | 66.71 |\n| Pseudo Labels | Labeled Data | __73.16__ |\n\n\n__Q5: Do all NNs (GTA, Teacher, Student) have the same type of feature extractor ...__\n\nAll NNs have the same architecture, feature extractor (ResNet-101 pre-trained on ImageNet), and mask predictor (DeepLabv3+). All of the methods in Table 1, 2, 3 take this setting. \n\n\n__Q6: I do not see a clear advantage of applying the re-weighting mechanism ...__\n\n\nComparison experiment in our original paper is conducted with the situation that there are only 183 labeled data. Under this situation, the lack of labeled data leads to large confidence divergence among pixels, even after confidence filtering. As a result, traditional confidence re-weighting brings about over-penalization. To tackle this phenomenon, we adopt Laplace Smoothing when designing our re-weighting mechanism. The comparison results validate its effectiveness. \n\n\nIn addition, we present the ablation study results over different ratios of labeled data below. \n\n\n| Confidence-based Re-weighting | Laplace Smoothing| 92 | 183 | 366 | 732 | 1464 |\n|---|---|---|---|---|---|---|\n| x | x | 68.91 | 72.10 | 73.77 | 75.65 | 78.95| \n| &#10004; | x | 68.04 | 70.67 | 74.33 | 77.31| 79.74|\n| &#10004; | &#10004; | __70.02__ | __73.16__ | __75.57__ | __78.37__ | __80.47__ |\n\n\nWe can observe that when labels are very limited ( e.g. 92 and 183 ), the vanilla confidence-based re-weighting brings about a small performance drop, yet our designed strategy alleviates it by modulating the weight distribution to a smoother one with Laplace Smoothing. On the other hand, when there is more data ( 366, 732, 1464 ), confidence-based re-weighting strategy is beneficial to the model performance, while our designed strategy pushes the performance to a higher level. \n\n\nBesides the quantitative results above, we also add the qualitative comparison results in our revised supplementary material (Appendix A.3). 
The re-weighting strategy leads to better performance on contour or ambiguous regions. \n\n\n__Q7: In the supplementary material the authors stated that they warm up all the NNs for 1 epoch on labeled data - what are the implications of not doing so? And why just 1 epoch? Just for fair comparisons?__\n\n\nAs depicted in Appendix A.1, following previous works, we train all the three models ( GTA, student model, and teacher model ) on labeled data for 1 epoch. After then, GTA and the student model are trained by pseudo labels and labeled data, respectively.\nHere, we present the results when taking different numbers of epoch here. Our method performs steadily when the warmup epoch varies. \n\n\n| | mIoU |\n|-----|----|\n| 0 | 72.93 | \n| 1 (Reported) | 73.16 |\n| 2 | 73.58 |\n| 3 | 73.39| \n\n\n__Q8: It would have been interesting to see what is the maximum performance the proposed method ...__\n\n\nThank you for providing this interesting experiment. We include all of the labeled and unlabeled data in PASCAL VOC and Cityscapes together to train the model and evaluate the model on PASCAL VOC, with ResNet-101 and DeepLabv3+ as the backbone. The results are shown below (the max column), which demonstrates that our method can tackle external data from other datasets. \n\n| | mIoU |\n|-----|----|\n| 662 | 77.82 | \n| 1323 | 80.47 |\n| 2645 | 80.57 |\n| 5291| 81.01 | \n| max | 85.42|\n\n__Q9: Suggestion: The \"SupOnly\" naming is kind of confusing since it's not purely supervised learning right (using only labeled data)? ...__\n\n\nSupOnly actually means using merely the labeled data to train the model. Teacher-Student means the standard teacher-student framework, which takes both labeled and unlabeled data during training. We have clarified it in revision (Table 1 in the main paper). \n\n__Q10: Typos and missed related works.__\n\nThanks. We have fixed these typos and added the related work in our revised paper.\n\n\n\n", " Sincere gratitude to you, especially for the detailed and insightful comments both on our main paper and supplementary materials. We hope our answers can address all of your concerns. We are looking forward to having further discussions with you.\n\n__Q1: Why not use the labeled data in training the GTA (supervised loss), wouldn't that yield better feature representations? Even if it were implemented ...__\n\nThank you for this insightful comment. In the following table, we try to train GTA with both labeled data and pseudo labels. Following Table 4-7 in our main paper, we take PASCAL VOC 2012, 183 labeled data, with ResNet-101 and DeepLabv3+ as the backbone, to compare the model performance. The student model is still trained with labeled data to update both its encoder and decoder, and its encoder is also updated by GTA via EMA. The teacher model is totally updated by the student model via EMA. We present the comparison results below. \n\n\n| GTA | Student | GTA mIoU | Student mIoU | Teacher ( Final ) mIoU | \n|-------|------|:------:|:------:|:------:|\n| Pseudo Labels | Labeled Data | 70.10 | 72.71 | __73.16__ |\n| Labeled Data + Pseudo Labels | Labeled Data | 71.84 | 71.75 | 72.28 |\n\n\nWe can observe that when taking both labeled data and pseudo labels to train GTA, the performance of GTA is improved by 1.74% (70.10% -> 71.84%), but the performance of the student model and teacher model (the final model we take for inference) is decreased by 0.96% (72.71% -> 71.75%) and 0.88% (73.16% -> 72.28%), respectively. 
It is reasonable that the GTA can learn better representation from more training data (both labeled data and pseudo labels). However, GTA will update the encoder of the student model via EMA, which is also directly learned from labeled data to update both its encoder and decoder. In this approach, the encoder of the student model will be updated by the labeled data with a limited number of images via both EMA and supervised learning, which will possibly cause overfitting and consequently, harm the student model's performance. Meanwhile, since the teacher model is purely updated by the student model via EMA, the performance of the teacher model is also harmed. Therefore, we choose to train GTA with pseudo labels alone. We have updated these discussions in the main paper. (Sec 4.4, Table 7)\n\n__Q2: The dataset splits in Tables 1 and 2 are very confusing ...__\n\n\nThese dataset splits strictly follow the previous works for a fair comparison [1][2]. We have revised the notations in Table 1 and 2. We present the number of the fine and coarse labels in dataset split of Table 2 (PASCAL VOC 2012 augmented training set) below. We can observe that this datatset split will first take fine labeled data. Suppose we require 662 or 1323 labeled data, this dataset split will only take the fine ones. When we need more than 1464 labeled data, it will take all of the 1464 fine labeled data, and then get the remaining data from coarse labeled data. \n\n\n| total | fine | coarse|\n|-----|------|-----|\n| 662 | 662 | 0 |\n| 1323 | 1323 | 0 |\n| 2645 | 1464 | 1181|\n| 5291 | 1464 | 3827 |\n\n [1] Geoff French, Samuli Laine, Timo Aila, Michal Mackiewicz, and Graham Finlayson. Semi-supervised semantic segmentation needs strong, varied perturbations. In British Machine Vision Conference, 2019 \n\n[2] Yuliang Zou, Zizhao Zhang, Han Zhang, Chun-Liang Li, Xiao Bian, Jia-Bin Huang, and Tomas Pfister. Pseudoseg: Designing pseudo labels for semantic segmentation. In International Conference on Learning Representations, 2021. \n\n__Q3: If the Student network is trained only on the labeled data, wouldn't this mean that the Student always sees fewer samples than the other two NNs, how would then the training procedure work? Algorithm 1 does not clarify this part ...__\n\n\nIn each iteration during training, we sample the same number of labeled and unlabeled data. In other words, the batch size for GTA (using unlabeled data), teacher model (using unlabeled data), and student model (using labeled data) are the same (3 for PASCAL VOC on each GPU, with 4 GPUs; and 4 for Cityscapes on each GPU, with 8 GPUs). During training, the teacher model provides pseudo labels for unlabeled data. GTA learns from these pseudo labels and updates the encoder of the student model via EMA. The student model (both encoder and decoder) further learns from labeled data with supervised training. In this way, the encoder of the student model can gain knowledge from both labeled and unlabeled data. The teacher model is then updated by the student model via EMA. We have revised Algorithm 1 to clarify the procedure. \n\n\n\n", " Sincere gratitude to you, especially for the constructive comments on our method design. We hope our answers can address all of your concerns. We are looking forward to having further discussions with you.\n\n__Q1: Why not utilize both labeled data and pseudo labels to train GTA? It does not make sense to me that fewer data can learn better feature representation.__\n\n\nThank you for this insightful comment. 
In the following table, we try to train GTA with both labeled data and pseudo labels. Following Table 4-7 in our main paper, we take PASCAL VOC 2012, 183 labeled data, with ResNet-101 and DeepLabv3+ as the backbone, to compare the model performance. The student model is still trained with labeled data to update both its encoder and decoder, and its encoder is also updated by GTA via EMA. The teacher model is totally updated by the student model via EMA. We present the comparison results below. \n\n\n| GTA | Student | GTA mIoU | Student mIoU | Teacher ( Final ) mIoU | \n|-------|------|:------:|:------:|:------:|\n| Pseudo Labels | Labeled Data | 70.10 | 72.71 | __73.16__ |\n| Labeled Data + Pseudo Labels | Labeled Data | 71.84 | 71.75 | 72.28 |\n\n\nWe can observe that when taking both labeled data and pseudo labels to train GTA, the performance of GTA is improved by 1.74% (70.10% -> 71.84%), but the performance of the student model and teacher model (the final model we take for inference) is decreased by 0.96% (72.71% -> 71.75%) and 0.88% (73.16% -> 72.28%), respectively. It is reasonable that the GTA can learn better representation from more training data (both labeled data and pseudo labels). However, GTA will update the encoder of the student model via EMA, which is also directly learned from labeled data to update both its encoder and decoder. In this approach, the encoder of the student model will be updated by the labeled data with a limited number of images via both EMA and supervised learning, which will possibly cause overfitting and consequently, harm the student model's performance. Meanwhile, since the teacher model is purely updated by the student model via EMA, the performance of the teacher model is also harmed. Therefore, we choose to train GTA with pseudo labels alone. We have updated these discussions in the main paper. (Sec 4.4, Table 7)\n\n__Q2: The presentation could be improved. The method section could be more concise.__\n\nWe have revised the method section in our revised paper to make it more concise and clear.\n \n\n__Q3: How to initialize GTA ...__\n\n\nAs depicted in Appendix A.1, following previous work, we train all the three models ( GTA, student model, and teacher model ) on labeled data for 1 epoch, which works as a warmup and makes the training process more stable. After then, GTA and the student model are trained by pseudo labels and labeled data, respectively.\n\n\n__Q4: Is it necessary to maintain the similarity between the parameters of GTA and the student model? If the two models are ...__\n\n\nThank you for this insightful comment. We try to maintain the similarity between GTA and the student model via L2 penalty on distances of parameters. Following Table 4-7 in our main paper, we evaluate the models with 183 labeled data on PASCAL VOC 2012, with ResNet-101 and DeepLabv3+ as the backbone. \n\n\n| | GTA | Student | Teacher|\n|-------|------|------|------|\n| Ours + Similarity | 69.55 | 71.86 | 72.47 |\n| Ours | 70.10 | 72.71 | __73.16__ |\n\n\nThe results show that maintaining the similarity between the parameters of GTA and the student model is incapable of boosting model performance. And we conjecture that since the GTA performs worse than the student model, closing the distances of all their parameters even slightly harms the performance of the student model.\n\n\n__Q5: More qualitative results (maybe in the supp) can help improve the paper.__\n\n\nThanks for your suggestions. 
We have added more qualitative results in our revised supplementary material (Appendix A.3). \n\n\n__Q6: Missing ablation study of EMA hyper-parameters.__\n\n\nWe present the ablation study of EMA hyper-parameters here and have added it to our revision (Table 9 in Appendix). Our method performs stably over different EMA hyper-parameters. \n\n\n| | mIoU |\n|-----|------|\n| 0.98 | 72.65 | \n| 0.99 (Ours) | 73.16 |\n| 0.999 | 73.44 |\n| 0.9999 | 73.57| \n\n\n__Q7: In Table 5, is the difference between ours and the original EMA that ...__\n\n\nThe encoder parameters of the student model are updated by GTA via EMA, and its decoder is trained by supervised learning. We have changed our expression in revision (Table 5 in the main paper).\n\n\n__Q8: Typos.__\n\n\nThanks. We have fixed these typos in our revised paper.\n", " Sincere gratitude to you. We hope our answers can address all of your concerns. We are looking forward to having further discussions with you. \n\n__Q1: The authors seemed not taken the liberty to establish statistical significance of the experimental results. Did the results the average of several experiments with different random seeds?__ \n\nWe run experiments 3 times and report the average score (Appendix A.1), with random seed = 0, 1, 2 respectively. In addition, we have reported the standard deviation (std) in our experiments in the revision, please refer to Table 1-3 in the revised paper for details. \n\n__Q2: They do not but promise to release the code upon the paper accepted.__  \n\nWe promise to release all the code and models of our method after the paper is accepted.\n", " The authors argue that the unlabeled data with pseudo labels is more beneficial to the feature extractor than the mask predictor in a semantic segmentation framework. Based on this, they introduce the Gentle Teaching Assistant model which learns from pseudo labels and (only) transfers the feature representation to the student model.\n\n ### Strengths:\n+ A novel gentle teaching assistant model that utilizes pseudo labels to aid feature representation learning.\n+ The paper is well motivated and the logic is easy to follow.\n+ Extensive ablation studies are provided.\n\n\n### Weaknesses:\n- Considering that GTA only uses pseudo labels, it is unclear to me why GTA is able to transfer better feature representation to the student model.\n- The presentation could be improved. The method section could be more concise (see the questions).\n- Not enough qualitative results (see the questions).\n\nPost rebuttal:\nI'd keep the current rating based on the rebuttal. Although the authors provided reasonable answers to the Q1 (similar concerns also appears in Reviewer sFdH), the implementation is a bit heuristic and need further discussions in the revised paper. \n * Why not utilize both labeled data and pseudo labels to train GTA? It does not make sense to me that fewer data can learn better feature representation.\n* How to initialize GTA, is it initialized randomly and trained from scratch?\n* Is it necessary to maintain the similarlity between the parameters of GTA and the student model? If the two models are very different from each other, does it make sense to transfer the knowledge through EMA?\n* More qualitative results (maybe in the supp) can help improve the paper. 
Currently, there is only one figure of visualization result in the supplementary and main text.\n* Missing ablation study of EMA hyper-parameters.\n* In Table 5, is the difference between ours and the original EMA that only the decoder parameters are updated? If so, maybe it’s better to express it as EMA (all parameters) and EMA (decoder).\n\nTypos:\n* line 109, is y^u a typo?\n* eq1: there is no k in p^u_{i, j}, although we can guess that k is kth term in p^u_{i, j},\n* eq3: why the CE loss is defined on data and pseudo labels rather than prediction and labels?\n* table 1: does 10582 should be 1464?\n\n The authors point out one limitation that the proposed method performs better on high-quality labeled data.", " The paper tackles the problem of semi-supervised learning (limited labeled data and a higher amount of unlabeled data) in the context of semantic segmentation. The authors adopt an auxiliary teaching assistant as an extension of the classical teacher-student paradigm. The proposed GTA module is trained solely on the unlabeled data using predictions from the original teacher. GTA facilitates feature representation learning that is further passed down to the student network without impacting the mask predictor from incorrect pseudo labels (trained supervised). Results are showcased on popular benchmarks, such as PASCAL VOC 2012 and Cityscapes. \n Strengths:\n* Although the proposed extension of the classic teacher-student paradigm is quite simple and straightforward, it is novel and effective\n* The paper tackles an interesting topic for the research community\n* The method performs considerably better w.r.t previous publications on popular benchmarks\n* Extensive experiments with recent relevant related work and ablation studies are also a plus\n* Algorithm 1, alongside Figure 1 and 2 complements the text beautifully and further clarify the procedure and enhance the authors' contribution\n\nWeaknesses:\n* Some design decisions were not fully explained - e.g. using solely pseudo labels for training GTA\n* Some experiments are incomplete - detailed in the \"Questions\" section\n* There are some doubts regarding the experimental setup and reported numbers in the tables (not clear if the methods are reimplementations or original numbers from the papers - this should be specified in the caption of the tables) - since it's stated in the paper that the comparisons are fair.\n\nOthers:\n* L107 - Typo - \"Semi-Supervised Semantic _Segmentation_\"\n* L124 - Another relevant, even pioneer relevant work for the Teacher-Student framework is [1] which definitely deserves mention.\n\n[1] Croitoru, Ioana, Simion-Vlad Bogolin, and Marius Leordeanu. \"Unsupervised learning from video to detect foreground objects in single images.\" Proceedings of the IEEE International Conference on Computer Vision. 2017. * Why not use the labeled data in training the GTA (supervised loss), wouldn't that yield better feature representations? Even if it were implemented as an alternating supervised and unsupervised training procedure for GTA it would have been an interesting experiment. \n\n* The dataset splits in Tables 1 and 2 are very confusing. Why not keep the same splits for both labeled and augmented training sets? In Table 2 what is the proportion of labeled (fine) versus unlabeled (coarse-SBD) data? \n\n* If the Student network is trained only on the labeled data, wouldn't this mean that the Student always sees fewer samples than the other two NNs, how would then the training procedure work? 
Algorithm 1 does not clarify this part and I suspect you can do this with different batch sizing, but did not find the batch size for any of the models. Maybe clarify this part a bit.\n\n* The authors suggest that training the student with faulty pseudo labels would hinder results. How about training GTA supervised and the Student semi-supervised? Did the authors experiment with this as well, any thoughts on what would the outcome be?\n\n* Do all NNs (GTA, Teacher, Student) have the same type of feature extractor (ResNet-101 pre-trained on ImageNet) and mask predictor (DeepLabv3+)? How about the methods in Tables 1, 2, and 3? \n\n* I do not see a clear advantage of applying the re-weighting mechanism (higher weight on higher confidence) after removing the pixels from the pseudo labels with low confidence. That proposed procedure won't solve the miscalibration or over-confidence phenomenon (detailed L40 - 46). This fact is also confirmed by the results in Table 7 (performance drop after applying re-weighting, which gets corrected after applying Laplace Smoothing). What regions benefit from applying this procedure? Did you see a gain in smaller objects or finer details? A more extensive analysis would have helped.\n\n* In the supplementary material the authors stated that they warm up all the NNs for 1 epoch on labeled data - what are the implications of not doing so? And why just 1 epoch? Just for fair comparisons?\n\n* It would have been interesting to see what is the maximum performance the proposed method could achieve given all the labeled and unlabeled data from any of the benchmarks. I do not see this experiment in any table.\n\n* Suggestion: The \"SupOnly\" naming is kind of confusing since it's not purely supervised learning right (using only labeled data)? Isn't it the standard Teacher-Student paradigm which is also semi-supervised? Also, stick to one naming convention - Teacher-Student and SupOnly are used interchangeably - Table 4 and Table 5. * The authors have addressed some of their method's limitations (addressing lower performance gains on the augmented sets) but qualitative results and comparisons with other methods and the coarse labels the authors refer to would have maybe a stronger argument - even if they were added as supplementary material.", " This paper addresses the semi-supervised semantic segmentation problem. A teaching assistant model is incorporated into the teacher-student mutual learning framework, which disentangles the effects of pseudo labels on feature extractor and mask predictor and protects the student model from the negative influences caused by unreliable pseudo labels in the mask predictor. Results on benchmark datasets show competitive performance against previous methods. The proposed method is simple yet effective. The idea of gentle teaching assistant is novel. Although decomposing the segmentation task into feature extraction and mask prediction is not new, disentangling the effects of pseudo labels for the tasks in new and interesting. The paper is well written and easy to read. The authors seemed not taken the liberty to establish statistical significance of the experimental results. Did the results the average of several experiments with different random seeds? They do not but promise to release the code upon the paper accepted. 
Yes, the authors have briefly discussed the limitations of the proposed method in the end of the experimental section.", " The paper proposes a semi-supervised method for semantic segmentation via a teacher-student framework. Unlike the standard approach that the teacher model generates pseudo-labels for unlabeled data, while the student model takes both labeled and pseudo-labeled data as training signals, the authors observe that pseudo-labels can be noisy and directly using them to train the student model may not be optimal. Therefore, the paper proposes to use another assistant model that takes pseudo-labels and only transfers feature representations to the student model via EMA. In addition, the paper also adopts a weighted pseudo-labeling scheme to train the assistant model. Experiments are conducted on PASCAL VOC and Cityscapes to demonstrate the usefulness of introducing the assistant model. **Strength**\n- The paper is written well and is easy to follow\n- The proposed method with the assistant model is simple and effective, which can reduce the issue of noisy pseudo-labels\n- The proposed feature transmission from the assistant model to the student model can be a generally useful tool for other tasks\n- Experiments show good performance improvements on two datasets\n\n**Weakness**\n\nSome hyperparameters and technical details are not explained very well\n- What is the \\tau value in Eq. (6)?\n- How do the authors select the threshold for pseudo-labeling, i.e., is \\gamma a constant or varied and how?\n\nExperimental results:\n- Table 4-7 are conducted on a smaller labeled set (183 images). However, I wonder how Table 4-7 would look like when there are more labeled data, e.g., >1000/2000 labeled images, which can be a more practical situation in real applications.\n- In Table 6, the teacher model performs better than the student model. It's interesting to know whether it is the same case for all experimental settings. If so, what is the intuition behind it? The authors may have more discussions on this.\n\nMissing references:\n- Adversarial Learning for Semi-Supervised Semantic Segmentation, BMVC'18\n- Semi-supervised semantic segmentation with high-and low-level consistency, PAMI'19\n- Semi-Supervised Semantic Image Segmentation With Self-Correcting Networks, CVPR'20\n- Semi-Supervised Semantic Segmentation With Pixel-Level Contrastive Learning From a Class-Wise Memory Bank, ICCV'21 Please see the above comments in Weakness. Limitations are briefly discussed at the end of the paper. However, the statements may not be correct. The authors state that the improvement in Table 2 is smaller than the one in Table 1, but their compared methods are not the same though. For example, the improvement over CutMix in Table 1 and 2 are actually similar. Also, when the labeled data is around 700 or 1400, the final results in Table 1 and 2 are similar. Although the labeled data can be noisy in the augmented set as the authors mentioned, I would suggest the authors to discuss the limitation from other perspectives." ]
[ -1, -1, -1, -1, -1, -1, 6, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "nips_2022_r70ZpWKiCW", "qAMqtd9H4l", "XYA7QGmDJnm", "IZd1J17Wc03", "ZfPY2YnGMzi", "1bc8XjJNFiZ", "nips_2022_r70ZpWKiCW", "nips_2022_r70ZpWKiCW", "nips_2022_r70ZpWKiCW", "nips_2022_r70ZpWKiCW" ]
nips_2022_JSha3zfdmSo
Faster Stochastic Algorithms for Minimax Optimization under Polyak-{\L}ojasiewicz Condition
This paper considers stochastic first-order algorithms for minimax optimization under Polyak-{\L}ojasiewicz (PL) conditions. We propose SPIDER-GDA for solving the finite-sum problem of the form $\min_x \max_y f(x,y)\triangleq \frac{1}{n} \sum_{i=1}^n f_i(x,y)$, where the objective function $f(x,y)$ is $\mu_x$-PL in $x$ and $\mu_y$-PL in $y$; and each $f_i(x,y)$ is $L$-smooth. We prove SPIDER-GDA could find an $\epsilon$-approximate solution within ${\mathcal O}\left((n + \sqrt{n}\,\kappa_x\kappa_y^2)\log (1/\epsilon)\right)$ stochastic first-order oracle (SFO) complexity, which is better than the state-of-the-art method whose SFO upper bound is ${\mathcal O}\big((n + n^{2/3}\kappa_x\kappa_y^2)\log (1/\epsilon)\big)$, where $\kappa_x\triangleq L/\mu_x$ and $\kappa_y\triangleq L/\mu_y$. For the ill-conditioned case, we provide an accelerated algorithm to reduce the computational cost further. It achieves $\tilde{{\mathcal O}}\big((n+\sqrt{n}\,\kappa_x\kappa_y)\log^2 (1/\epsilon)\big)$ SFO upper bound when $\kappa_x\geq\sqrt{n}$. Our ideas also can be applied to the more general setting that the objective function only satisfies PL condition for one variable. Numerical experiments validate the superiority of proposed methods.
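For reference, the two-sided PL assumption used above can be written in its standard form; the exact normalization of the constants may differ from the paper's statement.

```latex
% f is \mu_x-PL in x if, for every fixed y, and \mu_y-PL in y if, for every fixed x:
\[
\tfrac{1}{2}\,\|\nabla_x f(x,y)\|^2 \;\ge\; \mu_x\bigl(f(x,y)-\min_{x'} f(x',y)\bigr),
\qquad
\tfrac{1}{2}\,\|\nabla_y f(x,y)\|^2 \;\ge\; \mu_y\bigl(\max_{y'} f(x,y')-f(x,y)\bigr).
\]
```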
Accept
This paper presents an algorithm with strong theoretical guarantees for a fundamental problem of broad interest. It is well-written.
test
[ "cTcNQdT9QK5", "UdnU5vDb9f", "XlHhOUJxYM2", "zm3hiEKHNjI", "b5UKs2-iN9E", "YjIyTzf1q02", "6se_ojsGceJ", "XqbzSLAg11-", "b5W8jhjlR3", "BBb_rP4oZjL", "gjElqKwO1_J", "B3LGnFc0-zv", "-sM17u52Wfny", "GjngVzrmXCa", "nNOaK0o1L3", "WO4g3hPQ3Nv", "G4gNn4ofCoP" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank Reviewer N7QK for the time and effort. We are glad that the reviewer appreciated our clarification.", " Thank you for the clarification! I thought they helped me better understand the difference between the new algorithm and the previous ones, as well as the challenge posed by the minimax optimization (compared to minimization).\n\nGiven the response, I feel that the contribution of the current work is fairly solid, and have (temporarily) updated the overall score to 6 (weak accept) accordingly. I don't have further questions for the authors.", " Thanks to authors for providing the example of overparameterized networks and for the explanation on gradient lower bounds. I have no further questions for the authors.\n\nI still feel that the contribution of this paper is somewhat limited by the restricted applicability of the setting and will maintain my score.", " Please acknowledge the authors' reply.", " Please acknowledge the authors' reply.", " Thanks for your comment! We are glad that Reviewer vsqY have read our response to Reviewer NEh5 and found our contribution is interesting. We would like to incorporate these discussions into our later version. ", " Thanks for your reply! We are glad that Reviewer NEh5 found our results non-trivial.", " Thank you for explaining the differences between SREDA and the current paper.\n\nI found the authors' response to Reviewer NEh5's comments to be particular insightful in some of the technical contributions of this work, rather than the current response, and I think it is an interesting contribution. ", " Thanks to the authors for their response!\n\n2) I understand that, which is why I find your results not trivial.\n\nI have no further questions for the authors. If they too have nothing to add, I am ready to move on to a discussion with the other reviewers and then make my final decision.", " We thank the reviewer's effort and helpful comments.\n\nWe clarify the difference between SREDA [23] and the results in Section 6 of our paper.\n\n1. Luo et al. [23] suppose the objective function is strongly-convex in $y$ and possibly nonconvex in $x$, while the algorithms in Section 6 of our paper only suppose the objective function satisfies PL condition in $y$, which is weaker than strong concavity.\n\n2. Section 6 of our paper conduct the Catalyst acceleration to reduce the dependency on condition number, which is not considered by Luo et al. [23]. As a result, the SFO complexity shown in Theorem 6.2 of our paper only has $\\kappa\\log(\\kappa)$ dependency on condition number, which is better than $\\kappa^2$ dependency obtained by SREDA.\n\n3. The implementation of our SPIDER-GDA is much easier than SREDA. \nNote that SPIDER-GDA iterates with $x$ and $y$ simultaneously.\nHowever, SREDA requires a concave maximizer to iterate on $y$ after per update on $x$, which leads to an additional inner loop that makes the implementation be complicated.\n\nThank you for pointing out the typos. We have fixed them in revision.", " We thank the reviewer's effort and helpful comments.\n\n1. There are two-main difference between SPIDER-GDA and SVRG-AGDA.\n\n$\\quad$ a). The gradient estimators of SPIDER-GDA and SVRG-AGDA are different.\nWe omit the subscript $t$ in remain.\nFor SPIDER-GDA, the gradient estimator at $k$-th round depends on the nearest $(k-1)$-th round (see line 10-11 of Algorithm 1).\nFor SVRG-GDA, the gradient estimator at $k$-th round depends on the gradient at $0$-th round for each $k$ (line 9-10 of Algorithm 5 in Appendix C). 
Intuitively, the point $(x_k,y_k)$ could be too far away from $(x_0,y_0)$ for large $k$, which leads to the estimation error of SVRG-AGDA being larger than that of SPIDER-GDA. As a result, SPIDER-GDA has lower complexity than SVRG-AGDA.\n\n$\quad$ b). The update schemes of SPIDER-GDA and SVRG-AGDA are different.\nSVRG-AGDA uses the alternating update, that is\n$x_{k+1} = x_k - \eta_x G_x(x_k,y_k)$ and $y_{k+1} = y_k + \eta_y G_y(x_{k+1},y_k)$. \nIn contrast, SPIDER-GDA uses the simultaneous update, that is $x_{k+1} = x_k - \eta_x G_x(x_k,y_k)$ and $y_{k+1} = y_k + \eta_y G_y(x_k,y_k)$. \nThe convergence analysis of the simultaneous-type algorithm SPIDER-GDA is much \nsimpler than that of SVRG-AGDA.\nNote that the proof of Theorem D.1 in our paper is much simpler than the proof of the similar result for AGDA-SVRG (pages 26-28 in arXiv:2002.09621, which is the full version of [42]). \nAdditionally, we also provide GDA-SVRG in Appendix C (the simultaneous-type algorithm with the SVRG estimator), which has the same order of complexity as AGDA-SVRG [42]. Similarly, the analysis of GDA-SVRG in our framework is much simpler than that of AGDA-SVRG.\n\n2. The techniques of our paper are quite different from those in other settings of stochastic first-order optimization. \nCompared with the minimization problem, the minimax problem studied in our paper is more difficult since we have to consider two variables. Concretely, the analysis of the proposed SPIDER-GDA targets showing that $A_k = g(x_k) - g(x^*)$ and $B_k = f(x_k,y_k) - g(x_k)$ converge to zero simultaneously. \nWe also require that the gradient estimators well approximate $(\nabla g(x_k), - \nabla_y f(x_k,y_k))^\top$, rather than only showing that they approximate $(\nabla_x f(x_k,y_k), -\nabla_y f(x_k,y_k))^\top$, where $g(x) = \max_y f(x,y)$. \nThe difference between $g(x_k)$ and $f(x_k,y_k)$ makes our theoretical analysis more challenging.\nAnother related problem is solving nonconvex-strongly-concave minimax problems with stochastic first-order algorithms. Please see the response to ``Reviewer vsqY'' for the discussion.\n\n3. Our paper presents SPIDER-GDA by using mini-batch size $\mathcal{O}(\sqrt{n})$ and the stepsizes $\tau_x=\mathcal{O}(1/ (\kappa_y^2 L))$, $\tau_y=\mathcal{O}(1/L)$. \nThe same SFO upper bound for SPIDER-GDA can also be obtained by using mini-batch size $\mathcal{O}(1)$ and stepsizes $\tau_x=\mathcal{O}(1/(\sqrt{n} \kappa_y^2 L))$, $\tau_y=\mathcal{O}(1/(\sqrt{n} L))$. Generally speaking, using larger stepsizes could reduce the number of iterations, but each iteration then requires a larger mini-batch size.\nThe total number of SFO oracle calls will not be changed by such an adjustment if we balance the batch size and the stepsizes appropriately.\nThe main reason that SPIDER-GDA can improve the factor from $n^{2/3}$ to $\sqrt{n}$ is the different type of gradient tracking, which is discussed in point 1.\n\n4. For nonconvex minimization, plenty of SVRG-type algorithms have an $n^{2/3}$ factor in their SFO upper bounds. \nHence, we think it is reasonable that SVRG-AGDA also needs the $n^{2/3}$ dependency.\nTo the best of our knowledge, there is no rigorous theory to show whether the $n^{2/3}$ dependence is unavoidable for SVRG-type algorithms (even for minimization, there is no such theory). We think this is an interesting problem for future work.\n\n5. Thank you for pointing out the typos. We have fixed them in the revision.\n", " We thank the reviewer's effort and helpful comments.\n\n1. 
We thank the reviewer so much for pointing out the valuable references of PAGE and SARAH. We are happy to cite these papers in the revision.\nOur work has not considered the loopless modification, \nbut we strongly agree that such a variant has the potential to simplify the analysis and implementation. \nWe are willing to explore this point as future work. \n\n2. The existing results $\mathcal O(n\cdot L/\mu)$ and $\mathcal O(n^{2/3}\cdot L/\mu)$ for saddle point problems are obtained by the alternating-type algorithms AGDA and SVRG-AGDA. These methods depend on the update rules\n$x_{t+1} = x_t - \eta_x G_x(x_t,y_t)$ and $y_{t+1} = y_t + \eta_y G_y(x_{t+1},y_t)$, where $G_x$ and $G_y$ are the gradients (or gradient estimators) with respect to $x$ and $y$, respectively. In contrast, our paper focuses on simultaneous gradient descent ascent, that is, $x_{t+1} = x_t - \eta_x G_x(x_t,y_t)$ and $y_{t+1} = y_t + \eta_y G_y(x_t,y_t)$. \nThe convergence analysis of the simultaneous-type algorithm is \nconcise and its framework is quite different from previous work [42].\nAs a comparison, we provide $O(n^{2/3}\cdot L/\mu)$ complexity for GDA-SVRG (the simultaneous-type algorithm with the SVRG estimator) in Appendix C. \nThe proof of Theorem C.1 in our paper is much simpler than the proof of the similar result for AGDA-SVRG (pages 26-28 in arXiv:2002.09621, which is the full version of [42]). \nSimilarly, the convergence analysis of the proposed SPIDER-GDA with $\mathcal O(\sqrt{n}\cdot L/\mu)$ complexity is also concise.\nAlthough it is possible to design a SARAH/SPIDER-type algorithm for AGDA to obtain $\mathcal O(\sqrt{n}\cdot L/\mu)$ complexity, the convergence analysis may be considerably more complicated.\n\n3. One of the popular applications of minimax problems with PL conditions is AUC maximization within overparameterized neural networks [13, 22]. Its objective function satisfies the two-sided PL condition and has no convexity. Liu et al. [22] provided justifications for the PL condition of the AUC model with a one-hidden-layer neural network in Appendix A.7 of their paper. The discussion of the PL condition for the more general case of deep AUC models can be found in Section 4 of https://arxiv.org/pdf/2006.06889.pdf.\n\n4. Direct acceleration without envelopes looks non-trivial in our setting. We agree that it could be explored as future work.", " This paper studies finite-sum minimax optimization problems under PL conditions. The SPIDER-GDA algorithm, which uses simultaneous Gradient Descent-Ascent as the backbone and SPIDER as its variance-reduced gradient estimator, is proposed and analyzed, enjoying better guarantees than SVRG-based approaches. Catalyst acceleration and the one-sided PL extensions are also discussed. 
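For context, a Catalyst-style wrapper in its generic form repeatedly calls the base solver on a regularized surrogate; the weight $\beta$ and the prox-center update below are placeholders, not the paper's exact choices.

```latex
% Outer iteration k: solve the proximal subproblem inexactly with the base solver
% (e.g., SPIDER-GDA); the added quadratic makes the subproblem better conditioned in x.
\[
(x_{k+1},y_{k+1}) \;\approx\; \arg\min_{x}\max_{y}\,\Bigl\{ f(x,y) + \tfrac{\beta}{2}\,\|x - z_k\|^2 \Bigr\},
\qquad z_{k+1} = x_{k+1}.
\]
```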
Strengths:\n- The complexity guarantees improve the current state-of-the-art for both the two-sided and one-sided settings.\n- The algorithms are fairly easy to understand and implement. The two-sided and one-sided settings are handled with the same algorithms with different hyper-parameters.\n\nWeaknesses:\n- The setting for this paper is somewhat niche, considering the finite-sum case with PL conditions and no convexity. What are some good examples or justifications for this setting? (The numerical experiments in Sec. 7, for instance, are quadratic and convex-concave). Is it possible to provide gradient complexity lower bounds for this setting? Limitations of this work are adequately discussed.", " The paper is devoted to non-convex stochastic (finite sum) saddle point problems under PL assumption (two-side and one-side). The paper proposes a new method based on the famous variance reduction SPIDER. A modification with acceleration via Catalyst is also given. **Strengths:**\n\n1) Solving non-convex stochastic problems is an important task. The paper is relevant and interesting.\n\n2) It is easy for me to follow the paper. The literary review is done on a quite good level.\n\n3) The paper improves on existing bounds in the literature. \n\n4) In my opinion, the way to get the result is not trivial. But perhaps if one use L-SVRG or PAGE (loopless modifications of SVRG and PAGE) as a base it would be easier.\n\n**Weaknesses:**\n\n1) Literature review: if we are talking about SPIDER it has a lot of analogues in the literature -- see Table 1 from\n\nLi, Z., Bao, H., Zhang, X. Richtarik, P. PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization.\n\nI definitely ask for a citation of the next work, because it seems that SPIDER is just a version of SARAH: \n\nNguyen, L. M., Liu, J., Scheinberg, K., and Takac, M. SARAH: A novel method for machine learning problems using stochastic recursive gradient.\n\n2) The result of the paper is expected if one knows what estimates look like for minimization problems under PL conditions: \nGD --- SVRG --- SPIDER: $\\mathcal{O}\\left( n \\cdot L/\\mu\\right)$ --- $\\mathcal{O}\\left( n^{2/3} \\cdot L/\\mu\\right)$ --- $\\mathcal{O}\\left( \\sqrt{n} \\cdot L/\\mu\\right)$.\nFor saddle point problems we observe the same dependencies on $n$ -- GD and SVRG the results were already known.\n\n3) The PL condition does not seem to be particularly popular and widely researched (for saddle point problems), and its applicability to practical problems is slightly blurred (despite the fact that the authors give references to some works)\n\n**Verdict before discussions:**\n\nThe work makes an improvement on the available results for an important task but in not the most popular assumptions. The results are expected and understandable. I wouldn't be opposed to this work being presented at the conference, but I wouldn't be upset if it not. Borderline accept 1) Did the authors try to consider loopless variants of the SVRG (L - SVRG) and SPIDER (PAGE) methods? For minimization problems the proofs for them are simpler in my opinion. Perhaps it would be the same here. If such variants have not been considered, it could be explored as future works.\n\n2) For example, SVRG methods have a direct accelerated version - Katyusha or L-Katyusha. Did the authors try to design methods with direct acceleration, without envelopes? If not, it could be explored as future works. 
No limitations and potential negative societal impact", " The paper studies minimax optimization under Polyak-Łojasiewicz (PL) conditions. Given a function $f(x, y) = \\frac{1}{n}\\sum_{i=1}^{n}f_i(x, y)$ that is $\\mu_x$-PL in $x$ and $\\mu_y$-PL in $y$, assuming that each $f_i$ is $L$-smooth, the algorithm provably finds an $\\epsilon$-approximate solution for $\\min_x\\max_y f(x, y)$ using $O((n + \\sqrt{n}\\kappa_x\\kappa^2_y)\\log(1/\\epsilon))$ queries to a stochastic first-order oracle, where $\\kappa_x = L/\\mu_x$ and $\\kappa_y = L/\\mu_y$ are the condition numbers. Prior bounds have a larger factor of $n^{2/3}$ instead of $\\sqrt{n}$ in front of the condition number. The authors also present an accelerated algorithm, which achieves a better query complexity of $\\tilde O(\\kappa_x\\kappa_y\\sqrt{n}\\log^2(1/\\epsilon))$ in the $\\kappa_x \\ge \\sqrt{n}$ case.\n\nThe first algorithm, termed \"SPIDER-GDA\", is a variance-reduced version of alternating gradient descent ascent. The second algorithm proceeds by using SPIDER-GDA as a basic solver, and applying Catalyst acceleration to it.\n\nIn addition, the authors discuss an extension of the techniques to the setting where the function only satisfies the PL condition in $y$ but not $x$. Experiments are performed on synthetic data (quadratic functions), and the proposed algorithms are shown to converge faster than a baseline of Yang et al. (2020). Strengths: The paper is well written. I think the authors did a good job in motivating the problem, rigorously defining the problem setting, and presenting the results. Compared to the prior work of Yang et al. (2020), the query complexity of the new algorithms improves by a polynomial factor in theory, and is also shown to outperforming empirically. Overall, this is a solid and well-presented work.\n\nWeaknesses: On the negative side, the current paper is lacking in novelty and significance. Considering that: (1) the techniques used in the paper (variance reduction, alternating gradient descent ascent, Catalyst acceleration) are all well-known in the optimization literature; and (2) the query complexity obtained could still be far from optimal, I doubt whether the present work meets the bar.\n\nThe presentation could be further improved if the authors give more intuition/explanation behind the improvement from $n^{2/3}$ to $\\sqrt{n}$; see questions below.\n\nIn addition, there are a couple of possible math typos:\n- Page 1, Line 3: The optimization problem differs from (1) by a factor of $n$.\n- Page 1, Line 10: Should $\\log(1/\\epsilon)$ be $\\log^2(1/\\epsilon)$ instead?\n- Page 1, Line 10: It might be better to clarify what $\\tilde O$ notation hides: polylog factors in $\\kappa_x$, $\\kappa_y$ but not $1/\\epsilon$?\n- Page 4, Line 91: Extra $L^2$ in parentheses?\n- Page 7, Table 3: Missing \")\" for SPIDER-GDA, first case I have a few clarifying questions regarding the novelty and significance of the work, and I might update the rating based on the answer:\n\n1. What are the main differences between SPIDER-GDA and the SVRG-AGDA algorithm of Yang et al. (2020)? (E.g., is it about using a larger minibatch of size $B \\approx \\sqrt{n}$ instead of $B = 1$?)\n\n2. Are these changes novel? E.g., are they applied in other settings of stochastic first-order optimization? And if so, does this work involve a novel/different analysis?\n\n3. To what extent are these changes necessary? E.g., is there evidence proving/suggesting that the $n^{2/3}$ dependence is unavoidable for SVRG-AGDA? 
The main limitation is that the optimality of the proposed algorithms remains unclear. This is stated by the authors in Section 8.", " This work studies a new algorithm for finite sum smooth minimax optimization which has guarantees for a certain class of nonconvex-nonconcave minimax problems, which is formalized by a Polyak-Lojasiewicz (PL) condition. This problem has been previously studied by [42], and the PL condition provides a broad class of optimization problems which are of deep interest to machine learning, including, e.g., deep AUC maximization. The new algorithm shows that a technique related to SPIDER, which reduces the variance of SGD by estimating differences in the SGD iterates, can improve the stochastic first order (SFO) oracle complexity for this problem by improving the previously known bound of $O((n + n^{2/3} \\kappa_x \\kappa_y^2)\\log\\frac1\\epsilon)$ to $O((n + n^{1/2} \\kappa_x \\kappa_y^2)\\log\\frac1\\epsilon)$. Further improvements are provided for ill-conditioned instances. The obvious strength of this work is that it provides a direct improvement in the SFO oracle complexity for the smooth minimax optimization problem under PL conditions by improving previous results by poly(n) factors, which is significant. Because I do not work in the optimization literature, I could not tell what technical innovations were necessary to make this result possible, and it was not clear to me from the discussion of the paper either. In particular, the only conceptual message I got from the work was that “SPIDER-type algorithms can improve algorithms for PL minimax optimization beyond the previous SVRG-based algorithm.” This is certainly an important message, but I would appreciate more discussion and intuition on what this result implies in the broader theory of minimax optimization algorithms. For example, the authors point out previous works that apply SPIDER-type algorithms to minimax optimization, under different assumptions on the function being optimized, and mention that the technical details are different since earlier works converge sublinearly while the current work converges linearly; however, it is not clear to me how significant this difference is, since such differences already appear in basic analyses of vanilla gradient descent for convex vs strongly convex functions. In particular, SREDA in Luo et al. [23] achieve rates of the form $O(n\\log(\\kappa/\\epsilon) + n^{1/2}\\kappa^2 / \\epsilon^2)$, which seems very similar to the guarantee of the current paper, and a further discussion of the differences would be appreciated. \n\nPros\n\n- Direct improvements over previous work on an important problem in minimax optimization theory.\n- Extremely detailed and clean presentation of proofs in the supplementary material.\n\nCons\n\n- Lack of discussion which places the innovations in this work in the context of other work in minimax optimization. Minor comments\n\n- Defintion 3.1: should say $y\\in\\mathbb R^{d_y}$\n- Assumption 3.1: is there an extra $L^2$ in the parentheses?\n- Line 427 (supplementary): typo “us a question thta whether” This is largely a theoretical paper and has no potential for negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 2, 2 ]
[ "UdnU5vDb9f", "gjElqKwO1_J", "-sM17u52Wfny", "gjElqKwO1_J", "-sM17u52Wfny", "XqbzSLAg11-", "b5W8jhjlR3", "BBb_rP4oZjL", "B3LGnFc0-zv", "G4gNn4ofCoP", "WO4g3hPQ3Nv", "nNOaK0o1L3", "GjngVzrmXCa", "nips_2022_JSha3zfdmSo", "nips_2022_JSha3zfdmSo", "nips_2022_JSha3zfdmSo", "nips_2022_JSha3zfdmSo" ]
nips_2022_mTXQIpXPDbh
Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropogation
Transfer learning from the model trained on large datasets to customized downstream tasks has been widely used as the pre-trained model can greatly boost the generalizability. However, the increasing sizes of pre-trained models also lead to a prohibitively large memory footprints for downstream transferring, making them unaffordable for personal devices. Previous work recognizes the bottleneck of the footprint to be the activation, and hence proposes various solutions such as injecting specific lite modules. In this work, we present a novel memory-efficient transfer framework called Back Razor, that can be plug-and-play applied to any pre-trained network without changing its architecture. The key idea of Back Razor is asymmetric sparsifying: pruning the activation stored for back-propagation, while keeping the forward activation dense. It is based on the observation that the stored activation, that dominates the memory footprint, is only needed for backpropagation. Such asymmetric pruning avoids affecting the precision of forward computation, thus making more aggressive pruning possible. Furthermore, we conduct the theoretical analysis for the convergence rate of Back Razor, showing that under mild conditions, our method retains the similar convergence rate as vanilla SGD. Extensive transfer learning experiments on both Convolutional Neural Networks and Vision Transformers show that Back Razor could yield up to 97% sparsity, saving 9.2x memory usage, without losing accuracy. The code is available at: https://github.com/VITA-Group/BackRazor_Neurips22.
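A minimal sketch of this asymmetric idea for a single linear layer might look as follows. It is illustrative only: the keep ratio is arbitrary, the pruned activation is stored here as a masked dense tensor rather than in a compact bitmap format, and other layer types (convolutions, attention) need their own backward rules.

```python
import torch

class AsymSparseLinear(torch.autograd.Function):
    """Dense forward pass; only a magnitude-pruned copy of the input is kept for backward."""

    @staticmethod
    def forward(ctx, x, weight, keep_ratio=0.1):
        out = x.matmul(weight.t())                       # forward computation stays dense
        k = max(1, int(keep_ratio * x.numel()))
        thresh = x.abs().flatten().kthvalue(x.numel() - k + 1).values
        mask = x.abs() >= thresh                         # keep the top-k magnitudes
        ctx.save_for_backward(x * mask, weight)          # only the pruned activation is stored
        return out

    @staticmethod
    def backward(ctx, grad_out):
        x_pruned, weight = ctx.saved_tensors
        grad_x = grad_out.matmul(weight)                 # does not need the stored activation
        grad_w = grad_out.transpose(-2, -1).matmul(x_pruned)
        return grad_x, grad_w, None
```

Calling `AsymSparseLinear.apply(x, weight)` behaves like a dense linear layer in the forward pass, while the weight gradient is computed from the pruned activation only.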
Accept
This paper focuses on pruning the activations stored for backpropagation to reduce the memory footprint in transfer learning. The paper is well structured and the method is simple to understand. All the reviewers acknowledge that the experimental results are convincing. Overall, the meta-reviewer recommends acceptance of the paper.
train
[ "nQQaJ8ZSKab", "hm_x6tm8h1e", "qY5HyiDIwcf", "plBt6BUgu2U", "85QTV0m4dP", "WjQdCdi9gZ", "CqY4duQNm0Z", "z7IKMMnMSnn", "HmK0Kzn1rPz", "Z3_R1DKySMW", "146z1WKCTE6k", "qtKKF3AFMUZ", "7STfVHeLky0-", "-vs-KHuYFG", "LTnSZx1eSMA", "UiYpe5NIgvW", "ECi-8e8JXdW", "26d9oNy3aQY", "_EtR1RuPIz2", "Bhr3wSg9HS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate your reply to address my concerns. Although there is no experimental results about NLP, authors answered all my questions thoroughly. So, I maintain my score to acceptance.", " Thanks for your response. We conduct new experiments by adapting the channel-wise structural sparsification on BackRazor with ProxylessNAS-Mobile. As shown in the following Table, in Flowers, compared to the unstructural Back Razor@90%, the accuracy drops for the structural Back Razor@90%. However, the structural Back Razor@80% can still surpass TinyTL@320 with higher accuracy and lower memory usage. \n\n| Method | Training Memory | Flowers |\n|-------------------------------|-----------------|----------|\n| FT-Last | 31MB | 90.1 |\n| FT-Norm+Last | 192MB | 94.3 |\n| FT-Full | 366MB | 96.8 |\n| TinyTL | 37MB | 95.5 |\n| TinyTL@320 | 65MB | 96.8 |\n| Back Razor@90% (unstructural) | 42MB | 97.1 |\n| Back Razor@80% (structural) | 56MB | 96.9 |\n| Back Razor@90% (structural) | 42MB | 96.2 |\n", " Thanks for your response. You have addressed most of my concerns. Based on your response, the proposed method adopts unstructured activation spasification and then structuralize the pruned activation. My suggestion is that you can directly use structured sparsification methods, the structuralization time can thus be saved. Please correct me if I misunderstood your structuralization step.", " Thank you very much for raising the score!", " Dear Reviewer RTcC,\n\nSince the discussion section has been started for a few days, it would be highly appreciated if you could look at the above responses and reply. In this way, if you still have concerns, we could have time to address them before due of the discussion section.\n\nWe would also highly appreciate it if you could consider raising the score if our response has addressed your concerns.\n\nThank you very much for your time and efforts.", " Dear Reviewer i5p7,\n\nSince the discussion section has been started for a few days, it would be highly appreciated if you could look at the above responses and reply. In this way, if you still have concerns, we could have time to address them before due of the discussion section. \n\nWe would also highly appreciate it if you could consider raising the score if our response has addressed your concerns.\n\nThank you very much for your time and efforts.\n", " Dear Reviewer agR8,\n\nSince the discussion section has been started for a few days, it would be highly appreciated if you could look at the above responses and reply. In this way, if you still have concerns, we could have time to address them before due of the discussion section. \n\nWe would also highly appreciate it if you could consider raising the score if our response has addressed your concerns.\n\nThank you very much for your time and efforts.\n", " Dear paper authors,\n\nThank you for adding the new results and incorporating other feedback. Since this fixes my major concern about the paper, I will raise my score.", " Thanks for the quick response. The new experimental results have been added to the manuscript and appendix (marked in blue). We are running the missing baselines and would add them once done.", " I appreciate the authors' diligence in responding to my and other reviewers' concerns and running additional dataset experiments. I will respond to the rebuttal in more detail. However, my primary concern was the lack of experimental evidence in the paper, and it seems that the new experimental results were not added to the updated manuscript. 
I respectfully request the authors to do so.", " ## Writing\n### 1. Details\nWe have followed your suggestion to add a clear description on the `TopK` operator, the theoretical assumptions and reasoning in the revision, and we have revised the mathematical notion on $z_l$ to make it more clear. \n\n### 2. Broad Claims\nWe admit the de facto choice is too strong a claim. We have rephrased the claim in the draft by changing “de facto choice” to “widely used”. There is a typo on “transfer learning always happens on personal computing devices”, it should be “many transfer learning happen on personal computing devices”. However, these are not the main claim of the paper.\n\n### 3. Typos\nThank you for your suggestion. We have proofread our manuscript again and done a revision, which has been highlighted with blue text in the new draft.\n", " ## Experiments\n### 1. baseline, architecture choosing, and error bar\n\n* We argue that the chosen benchmarks are representative and at the SOTA level. The chosen two networks are representative of the mobile CNN(Convolutional Neural Networks) (ProxylessNAS-Mobile) and the Vision Transformers (ViT-B/16). They are also very different from each other: one is based on convolutional layers and the other is based on attention mechanism. Good performance on these two networks indicates the proposed Back Razor can work on a wide range of networks.\n* For the benchmark choosing, we follow the benchmark of TinyTL [1] (Neurips, 2020) for ProxylessNAS-Mobile. We compare with BitFit [2] (ACL, 2022) and [3] (Arxiv, 2022) for ViT-B/16. All of them are recent publications. Two of these works are also accepted to top conferences. We believe this demonstrates the chosen baselines are at SOTA level,\n* We offer the error bar for Back Razor@90% and FT-Full in Table 1 by running five times with different random seeds. The proposed Back Razor is very robust with a small standard deviation of [0.11, 0.16, 0.07, 0.05] for [Pets, Aircraft, CIFAR10, CIFAR100], respectively.\n\n### 2. Experimental coverage\nWe want to refer you to the common response, where we add more empirical results. Specifically, we conduct experiments on four more datasets for ProxylessNAS-Mobile and five more datasets for ViT-B/16. The CelebA experiment requires the pre-train model on VGGFace2. As we cannot finish the pre-train in the short rebuttal window, we skip the experiment on this dataset. Moreover, we also explore different batch size settings and a new optimizer choice:\n\nTo verify the proposed Back Razor can work for different batch size. We follow TinyTL conducting experiments with a batch size of 1 for ProxylessNAS-Mobile. As shown in the following table. The proposed BackRazor can achieve higher performance with less memory usage compared to TinyTL under a batch size of 1.\n| Method | Training Memory | Flowers |\n|-----------------------|-----------------|----------|\n| FT-Full | 34MB | 96.4% |\n| TinyTL | 18M | 96.1% |\n| Back Razor@80% (ours) | 15MB | 96.4% |\n \n\nTo verify the proposed Back Razor can work for different optimizers. We conduct experiments for ProxylessNAS-Mobile on SGD optimizer (the previous optimizer is adam). 
As shown in the following table, the proposed Back Razor can achieve comparable performance with less memory usage compared to fully fine-tuning with SGD optimizer.\n| Method | Training Memory | Flowers |\n|-----------------------|-----------------|----------|\n| FT-Full | 366MB | 97.1% |\n| Back Razor@80% (ours) | 42MB | 96.7% |\n\nFor other experiments (including the experiments on more tasks as well as the quantization of CNN baselines), we promise to add these experiments in the future version given that the rebuttal window is short.\n\n### 3. On-device memory usage in the case of the CNN architecture\n\nThe on-device (GPU) memory usage of ProxylessNAS-Mobile is illustrated in the following Table. With a batch size of 8, the memory of Back Razor@90% is comparable with the baseline. For a larger batch size of 128, Back Razor@90% can save 759MB of memory compared to baseline. There is still space for optimizing the implementation as it does not achieve the theoretical performance. \n\n| batch size | baseline | Back Razor@90% |\n|------------|----------|----------------|\n| 8 | 1655MB | 1637MB |\n| 128 | 7393MB | 6634MB |\n\n\n## Theory\n### 1. Convergence concern\n\n We clarify that Theorem 1 gives only the convergence guarantees of Back Razor for MLPs and CNNs but not the quality of models. As we have observed that convergence speeds are similar, we believe that our theory is well-applied in our cases. In the view of experiments, we show the performance with batch sizes up to 128 and have demonstrated superior performance under the memory-constrained setting in our manuscript, and we believe such batch size can be considered as ``large’’ so our method is working. We are open to clarifying any further confusion on this point. \n\n[r1] ON LARGE-BATCH TRAINING FOR DEEP LEARNING: GENERALIZATION GAP AND SHARP MINIMA\n\n\n", " ### 1. Concerns about the memory usage computing\n\n* To clarify, the memory usage reported in the paper is all the peak memory. We totally agree with the reviewer that peak memory determines the memory requirement for computing devices. \n\n* It is worth noting that the activations are sparsified during the forward propagation. As shown in Algorithm 1, in the forward pass, we would prune and save the sparsified activation $\\tilde{z}_{i-1} $ each time it finishes the forward on layer $f_i$. \nIn contrast, the traditional algorithm requires saving the full precision activation.\n\n### 2. The discrepancy between theoretical memory use and practical memory use\n\nGood question. Directly sparsifying the activation in the original format cannot lead to memory saving. To address this, we structuralize the pruned activation before saving it. As described in lines 165-169, the sparse matrix would be saved with a bitmap and a much smaller dense matrix. This makes savings of sparse activation memory efficient. \n\n### 3. Comparison with gradient checkpointing\n\nThanks for pointing out this relevant work. When applying the gradient checkpointing on the ProxylessNAS-Mobile, it can achieve a memory usage of 96.1MB with the same accuracy as the fine-tune baseline (as it wouldn’t change the backward results). In comparison. Our method can be more memory efficient with 42MB memory (Back Razor@90%) with comparable accuracy (as shown in Table 1). Moreover, our method requires no extra computational cost while gradient checkpointing requires one more forward pass. 
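The bitmap-plus-compact-values storage referred to in point 2 above can be illustrated with a toy helper. This is a simplified sketch: the mask here is a byte-per-element `bool` tensor, whereas a practical implementation would pack it into actual bits and fuse the expansion into the backward kernel.

```python
import torch

def compress(activation, mask):
    # Keep only surviving values plus an occupancy map; at 90% sparsity this is roughly
    # 0.1 * N stored values plus N mask entries instead of N full-precision values.
    return activation[mask], mask

def decompress(values, mask):
    out = torch.zeros(mask.shape, dtype=values.dtype, device=values.device)
    out[mask] = values
    return out
```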
Also, it is worth noting that the proposed method can be combined with gradient checkpointing to further improve memory efficiency.", " ### 1. Proof-reading\nThanks for pointing it out. We have proofread our manuscript again and done a revision, which has been highlighted with blue text in the new draft.\n\n### 2. More experiments on large models such as ViT-L/16\n\nGreat suggestion. The comparison between the proposed Back Razor and fully fine-tune baseline on ViT-L/16 is illustrated in the following table (employ training setting). The proposed Back Razor can achieve comparable performance with FT-Full for both CUB200 and Flowers datasets. Remarkably, it can even surpass CUB200 while being more memory efficient.\n| dataset | Memory | CUB200 | Flowers |\n|----------------|-----------|--------|---------|\n| FT-Full | 51257.0MB | 86.6% | 99.6% |\n| Back Razor@90% | 12270.4MB | 86.9% | 99.5% |\n\n\n### 3. Plot the memory footprint of training\n\nThis is a nice suggestion. We have added this to our draft. \n", " ### 1. Novelty Concern\n\nWe respectfully disagree this is an engineering trick. We think the engineering trick is describing the technique with very predictable performance. However, it is not clear whether the activations stored for backward can be sparse. And our work has empirically and theoretically proven it.\n\n### 2. More downstream tasks\n\nNice suggestion. We want to refer you to the common response, where we include more experiment results on more downstream datasets in that section.\n\n### 3. Explain the theoretical results\n\nThanks for pointing this out. There is a typo for the sentence in Lemma 1, it should be “so pruning the activation stored for backward does not change its gradient”. We have addressed this typo in the updated version.\n\n### 4. Typo in Figure 2\n\nThanks for pointing this out. We have addressed this typo.\n\n### 5. Memory difference between activations in forward pass and backward pass\n\nDuring the forward pass, the activations would be saved for the following backward pass. In contrast, during the backpropagation, it would only use the previous activation for computing gradient and the memory would not further increase. Therefore, the memory would reach the peak at the end of the forward pass during training.\n\n### 6. How can pruning activations in backpropagation reduce memory by more than 2x\n\nTo clarify, Back Razor happens in the forward pass. Specifically, it would prune the activation of layer $n-1$ and store the pruned version for backward when the forward proceeds to layer $n$. \nAs discussed in the last question, the activations that require large memory usage are all in the forward pass, pruning in the forward pass can reduce memory by more than 2x.\n\n### 7. Have you tried on language transformers?\n\nNo, we haven’t. But this is a great suggestion, given the rebuttal time window is too short, we would extend Back Razor to language transformer tasks in the future version.\n", " We thank all reviewers for their constructive comments. We updated the draft (the changes are marked in blue). Here, we add more experiments to support our claim as requested by reviewers agR8 and 3voy. And we also want to highlight these new results to all reviewers.\n\nWe start by extending the experiments on ProxylessNAS-Mobile with four more datasets. As shown in the following table, the proposed Back Razor can surpass TinyTL by [1.6, 4.8, 4.2, 2.4] for [Flowers, Cars, CUB, Food], respectively under comparable memory consumption. 
Moreover, it can also surpass TinyTL@320 for most datasets while saving more memory.\n| Method | Training Memory | Flowers | Cars | CUB | Food |\n|-----------------------|-----------------|----------|-------|------|-------|\n| FT-Last | 31MB | 90.1 | 50.9 | 73.3 | 68.7 |\n| FT-Norm+Last | 192MB | 94.3 | 77.9 | 76.3 | 77.0 |\n| FT-Full | 366MB | 96.8 | 91.2 | 81.3 | 83.8 |\n| TinyTL | 37MB | 95.5 | 85.0 | 77.1 | 79.7 |\n| TinyTL@320 | 65MB | 96.8 | 88.8 | 81.0 | 82.9 |\n| Back Razor@90% (ours) | 42MB | 97.1 | 89.8 | 81.3 | 82.1 |\n\nWe also extend the experiments on ViT-B/16 with five more datasets. As shown in the following table, the proposed Back Razor can achieve comparable performance with FT-Full with much less memory usage. It is also worth noting that the proposed Back Razor can even improve the accuracy of CUB and Food. More baseline comparisons will be included in the future version.\n\n| Method | Training Memory | Flowers | Cars | Aircraft | CUB | Food |\n|-----------------------|-----------------|----------|-------|----------|-------|-------|\n| FT-Full | 19235MB | 99.5% | 85.5% | 78.8% | 85.5% | 90.3% |\n| Back Razor@80% (ours) | 4565MB | 99.4% | 84.0% | 77.3% | 86.3% | 90.5% |\n\n", " * This paper considers the scenario that one want to not only deploy pre-trained models but also continue train/finetune these models on local devices. It proposes a novel method called Black Razor that reduces the training/finetuning memory of neural networks, by pruning/compressing the activation during back-propagation. BlackRazor is able to achieve 96% sparsity, saving 9.2x memory without losing accuracy. The paper also conducts a theoretical analysis of BlackRazor's convergence. Strength:\n* The proposed method seems to have great results: successfully reducing much training memory without much loss of accuracy.\n* The proposed method seems to be simple and generalizable. \n\nWeakness:\n* One weakness of this paper is the novelty of the proposed method. The method is to conduct magnitude pruning on activations for back-propagation. This seems to be more like an engineering trick rather than a science / learning paper.\n* This paper will be more solid if it’s validated on more downstream tasks. \n* The theoretical results are a bit confusing: for example, in algorithm 1, it prunes activations during forward pass, yet in Lemma 1, it states that “so pruning the activation during backward does not change its gradient”.\n * Typo in Figure 2: “Storeage”\n* What is the memory difference between activations in forward pass and backward pass? How can pruning activations in backpropagation reduce memory by more than 2x?\n* Have you tried on language transformers? N'/'A", " The paper proposes a simple and memory-efficient method called Back Razor that prunes activations only for back-propagation, presents a theoretical analysis that the convergence rate of Back Razor can be similar to that of SGD, and shows the effectiveness of Back Razor on CNN and Vision Transformer. Strengths\n\n(1) The paper is well structured and the main idea is clearly explained.\n\n(2) The extensive experiments demonstrate the efficacy of Back Razor to some extent.\n\nWeaknesses\n\n(1) Proof-reading is required due to some errata and awkward expressions.\n\n(2) More experiments on large models such as ViT-L/16 seem to be needed to verify whether Back Razor is really working well on large pre-trained models. 
In Figure 3, it would be better if the memory footprint of training as well as accuracy is plotted when comparing pruning both forward and backward with pruning only backward. As mentioned in the paper, the authors plan to apply Back Razor to several downstream tasks.", " To reduce memory use of training a transfer learning model, the paper proposes to prune activations stored for back-propagation, instead of traditionally pruning activations stored for both forward-propagation and back-propagation. Experiments on CNN and ViT show that the proposed method achieves comparable accuracy with more aggressive memory reduction compared to competitive methods TinyTL, Bitfit, and Mesa. Strengths.\n1) In Section 1 and Section 3.1, the paper provides clear explanations on why the paper focuses on compressing activations stored for back-propagation. \n2) In Section 3.2, the proposed techniques are clearly written and reasonable. Theoretical analysis on convergence is also provided in Section 3.3.\n3) Experiments on CNN and ViT are convincing. For ViT, the paper provides comparison between theoretical memory use and practical memory use in Table 2 and Table 3, respectively. \n\nWeaknesses.\n1) I think the authors should provide peak memory use in the experiments. Since the method prunes activations stored for back-propagation, I agree with the authors that the theoretical memory use and averaged memory use are both decreased. However, as the activations are not sparsified during forward-propagation, I am not sure whether peak memory use is decreased compared to previous methods. If the peak memory use is not decreased, the proposed method actually requires computing devices with the same memory capacity as previous methods.\n2) A potential problem of the method is discrepancy between theoretical memory use and practical memory use, as shown in Table 2 and Table 3. The problem is incurred by the adopted unstructural sparsification strategy, pruning the smallest magnitude activations, as introduced in Section 3.2. According to memory working mechanism, memory is accessed in column-wise or row-wise manner. A column/row is accessed even if there is only one non-zero element. Therefore, unstructural sparsification cannot effectively practical memory use.\n3) I think the traditional method gradient checkpointing is very related to this paper. Is it possible to qualitatively or quantitatively compared with this method in the experiments? 1) Is it possible to provide peak memory use during training?\n2) Have you tried structural sparsification strategies?\n3) Is it possible to qualitatively or quantitatively compared with the traditional method gradient checkpointing in the experiments?\n As I mentioned above, a major limitation is the adopted unstructural sparsification strategy which incurs discrepancy between theoretical memory use and practical memory use.", " Back Razor is a memory-efficient algorithm for transfer learning on edge devices, which works by using sparse activations during the backward pass of neural network training. The paper supports its claims with experimental validation on four image classification datasets and two architectures (CNN and ViT), as well as some theoretical convergence results.\n ## Strengths\n\nThe proposed method is elegant, simple to understand, and easy to implement. If the early experimental results hold up, this method can substantially help with neural network training on edge devices. 
Experimental evaluation looks quite promising.\n\nThe placement of the algorithm in the context of transfer learning on edge devices is appropriate and compelling. Overall, the paper flows well.\n\n## Weaknesses\n\n### [major issue] Experiments [This was addressed to my satisfaction during the discussion period]\nThe authors only show experiments for vision tasks using two architectures (a CNN optimized for mobile performance and ViT). Besides the basic baselines of full finetuning / FC finetuning / FC+Norm, the paper only benchmarks on one other method in the case of CNNs and two methods in the case of ViTs; it is not clear from the paper that these methods are SOTA for each of the tasks considered. In almost all cases, the experiments are presumably run only once, as generally no confidence intervals were given.\n\nThe biggest concern is that the experimental coverage is insufficient to draw strong conclusions. In particular, TinyTL, which is used as a benchmark, provides results for nine datasets of which BackRazor is only tested on four for CNN, and three for ViT. There is also no investigation of how well the method performs with different batch sizes and optimizers, or for other categories of tasks, such as segmentation or language understanding. In the case of CNNs, it would also be useful to see a comparison to other methods that use reduced precision (i.e. quantization); for ViT this was provided.\n\nFinally, it would be useful to see the actual on-device memory usage in the case of the CNN architecture.\n\n### [minor issue] Theory [This was addressed to my satisfaction during the discussion period]\nThe theory gives a convergence result for a linear network, and then some argumentation is made to extend this result to CNNs. However, Theorem 1 states that the algorithm converges to a neighborhood of a stationary point, and the argument that this neighborhood can be reduced by increasing the batch size does not work in this memory-constrained setting. Thus, it is not clear how well the theory applies, even though it is also experimentally verified that the convergence rates are similar, hence this is overall a minor issue.\n\n### [moderate issue] Writing [This was addressed to my satisfaction during the discussion period]\nWhile the flow of the paper is good, important details are often not made as clear as they could be, or are hard to find. For example, the TopK operator used to prune the activations should be clearly indicated in the algorithm. The theoretical assumptions, and the grounds for them are not explained, leaving the reader to consult another paper for context. Finally, the transition in line 129 of moving from an arbitrary activation $z_i$ to the final activation $z_l$ is somewhat confusing, since none of the rest of the paper focuses on the last layer specifically.\n\nSome sentences make overly broad claims, for instance the very first statement of the abstract calls transfer learning the unqualified _de facto_ choice, which is not generally true. Likewise, in line 101, the claim “transfer learning always happens on personal computing devices” is unsubstantiated.\n\nThe proof in the appendix contains numerous typos in the math notation, for instance:\nIn line 4, $ \\tilde{z}’_{k} = \\text{diag}(m)(\\tilde{z}$ should have been $ \\tilde{z}’_{k} = \\text{diag}(m)(\\tilde{z}_{k}$ \t \nIn line 6, $\\tilde{g}_k$ should have been $\\tilde{g}_k$\nIn the last formula of line 5, $\\beta_i$ should have been the boldface $\\beta$. 
Also, some of the gradients in that equation are incorrectly marked as the full gradient $g$ rather than $\\tilde{g}$.\nThe $\\kappa$s in line 13 should have been $K_a$.\t\n\nOther typos are likewise numerous, although they generally do not adversely affect understanding:\n\nLine 40 of -> or\nLine 99 - missing “the” before “de facto”\nLine 102 constrain -> constraints\nLine 124 by -> of\nLine 130 FLOPs should be capitalized\nLine 157 actions -> activations\nLine 166 value -> values\nLine 179 weight -> weights\nLine 201 Assumption -> assumptions\nLine 231 the word “based” shouldn’t be there\nLine 267 Missing verb after “even”\nLine 288 employ -> employs\nLine 313 illustrate -> illustrates\nLine 329 backpropogation -> backpropagation\n The main suggestion to the authors is to conduct a substantially more thorough experimental validation of the method. The authors correctly pointed out that the paper only looks at image classification tasks, and also addressed the limitations of their theory’s application to ViTs. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 5, 4 ]
[ "WjQdCdi9gZ", "qY5HyiDIwcf", "7STfVHeLky0-", "z7IKMMnMSnn", "7STfVHeLky0-", "-vs-KHuYFG", "LTnSZx1eSMA", "HmK0Kzn1rPz", "Z3_R1DKySMW", "146z1WKCTE6k", "qtKKF3AFMUZ", "Bhr3wSg9HS", "_EtR1RuPIz2", "26d9oNy3aQY", "ECi-8e8JXdW", "nips_2022_mTXQIpXPDbh", "nips_2022_mTXQIpXPDbh", "nips_2022_mTXQIpXPDbh", "nips_2022_mTXQIpXPDbh", "nips_2022_mTXQIpXPDbh" ]
nips_2022__atSgd9Np52
DreamShard: Generalizable Embedding Table Placement for Recommender Systems
We study embedding table placement for distributed recommender systems, which aims to partition and place the tables on multiple hardware devices (e.g., GPUs) to balance the computation and communication costs. Although prior work has explored learning-based approaches for the device placement of computational graphs, embedding table placement remains to be a challenging problem because of 1) the operation fusion of embedding tables, and 2) the generalizability requirement on unseen placement tasks with different numbers of tables and/or devices. To this end, we present DreamShard, a reinforcement learning (RL) approach for embedding table placement. DreamShard achieves the reasoning of operation fusion and generalizability with 1) a cost network to directly predict the costs of the fused operation, and 2) a policy network that is efficiently trained on an estimated Markov decision process (MDP) without real GPU execution, where the states and the rewards are estimated with the cost network. Equipped with sum and max representation reductions, the two networks can directly generalize to any unseen tasks with different numbers of tables and/or devices without fine-tuning. Extensive experiments show that DreamShard substantially outperforms the existing human expert and RNN-based strategies with up to 19% speedup over the strongest baseline on large-scale synthetic tables and our production tables. The code is available.
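One way to picture the "sum and max representation reductions" mentioned above is a permutation-invariant per-device cost head along the following lines; the layer sizes, the 21-dimensional table features, and the two-component cost output are illustrative assumptions rather than the actual DreamShard architecture.

```python
import torch
import torch.nn as nn

class DeviceCostHead(nn.Module):
    """Embeds per-table features, reduces them with sum and max, predicts device costs."""

    def __init__(self, n_table_features=21, hidden=128):
        super().__init__()
        self.table_mlp = nn.Sequential(
            nn.Linear(n_table_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.cost_mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),        # e.g., predicted computation and communication cost
        )

    def forward(self, table_feats):
        # table_feats: (num_tables_on_device, n_table_features); assumes at least one table.
        h = self.table_mlp(table_feats)
        pooled = torch.cat([h.sum(dim=0), h.max(dim=0).values], dim=-1)
        return self.cost_mlp(pooled)
```

Because the reduction is a sum and a max over however many tables sit on a device, the same head applies unchanged to placements with different numbers of tables.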
Accept
The paper proposes DreamShard, a RL-based framework for placing embedding tables across multiple devices in distributed recommender systems. DreamShard jointly trains a cost model (to predict the cost of communication and operator fusion for new configurations) and a policy network to make placement decisions based on the cost model. This two step design makes the algorithm more efficient than naive RL solutions and end-to-end training leads to better generalization than model-based offline strategies. All reviewers agree that the paper is well written and proposes a practical solution to an important problem that is not well studied in the literature. Furthermore, the paper has a strong empirical section that compares DreamShard to strong baselines on open-sourced and production datasets, shows good results and conveys a broad picture of many aspects of their method. Overall this is a very well executed paper proposing an efficient and practical solution to an underexplored problem. I recommend acceptance. For the camera ready the authors should include the new scaling experiments they performed to address the reviewers comments. I would also recommend integrate some of the clarifications regarding the contributions and distinctions from prior work (comments to Reviewer AKvA) in the paper. Also, it might be worth including the greedy baseline numbers for some experiments, just to put the performance into perspective.
train
[ "-wI8DvETKTK", "M2hYyGBXk9Ar", "JH2jc_DxhGs", "LkwejsLEBQk", "0c5aa__OLo8", "dHkTEDZ68e", "40lG2Pjcyuj", "r3k6hKa0HyP", "6jf1tCdm-qP", "i8xvomYq742", "Nww6WIzpPUT" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank all the reviewers for the support and for taking the time to provide all the feedback to help improve the paper. As we are approaching the end of the rebuttal/discussion period, we would like to highlight the contributions of our work and summarize the improvements we have made.\n\n**We have made the following major contributions:**\n- We have studied embedding table placement, an important problem that has been rarely explored in the literature. The prior work mainly focuses on compressing the embedding tables to improve efficiency, which will lead to precision drops. In contrast, embedding table placement is an orthogonal direction that will not impact the precision. \n- We have proposed DreamShard, a general framework for embedding table placement. DreamShard trains a generalizable cost network to predict the costs (i.e., states and rewards in the MDP) and a generalizable policy network to make decisions based on the cost network. DreamShard can solve a family of embedding placement problems instead of a specific placement problem, which enables it to be easily deployed in real-world recommender system workloads.\n- We have performed extensive experiments on both open-sourced and production datasets, showing that DreamShard is effective, generalizable, efficient, and scalable. We will open-source our code to facilitate future studies in this direction.\n\n**We have run the following experiments to address reviewers’ questions or concerns:**\n- We have run the experiment of 8 GPUs on the DLRM dataset, verifying that DreamShard shows consistent advantages (scalability questions raised by Reviewers Lt2a and ypFj).\n- We have extended our approach to an ultra-large production recommendation model with nearly a thousand embedding tables that demand multi-terabyte memory. We have shown that DreamShard can significantly boost the training throughput in a training cluster with 128 GPUs (scalability questions raised by Reviewers Lt2a and ypFj, vocab distribution changes question raised by Reviewers Lt2a).\n- We have run additional ablation studies for the accuracy of the cost model (feature importance question raised by Reviewer ypFj)\n\nWe have also tried our best to answer the questions from all the reviewers (please see the detailed responses below). As we are approaching the end of the discussion period, please let us know if you have further feedback. We are happy to address any questions/concerns in the remainder of the discussion period.\n", " **Does the MDP formulation is usual in the related research?**\n\nOur generalizable MDP formulation of embedding table placement is unusual from two perspectives:\n\nFirst, while MDP formulation has been used in device placements, it has not been used for embedding table placements, which pose many unique challenges, e.g., operation fusion, ultra-large tables with multi-terabyte memory, and balancing both computation and communication overheads. DreamShard tackles these challenges by jointly training a cost network and a policy network to optimize the reward.\n\nSecond, the existing MDP formulation for device placement can not generalize to many unseen scenarios (e.g., they often can only deal with a specific placement problem or at most a similar problem with the same number of devices). In contrast, our formulation can flexibly accommodate different numbers of tables/devices with the generalizable designs of the policy network and the cost network. 
Unlike previous studies, **DreamShard can solve a family of embedding placement problems instead of a specific placement problem**, which enables it to be easily deployed in real-world recommender system workloads.", " Thank you for the helpful feedback and comments. Please see our response below.\n\n**1. It would be nice to have more results on DLRM with 8GPUs in table 1.**\n\nWe have included 8-GPU results on DLRM in Table 1 (part of 4-GPU results are moved to Appendix due to space limitation). In addition, we have run experiments with 128 GPUs. DreamShard shows consistent improvement in all the settings. **Please see the response to the scalability part above for details**.\n\n**2. Why the author choose 21 table features such as hash size? Can it influence the performance of the model?**\n\nThe features are designed based on domain knowledge. In our experiments, we find each of the features can help improve performance. \n\nTable 3 in the paper has compared DreamShard with the variants with each feature being removed. We find that using all the 21 features can lead to the best load balance.\n\nTo fully address your concern, we further run an ablation study for the accuracy of the cost model on the Prod dataset. We did not use the DLRM dataset here because its tables have the same table dimension, which could make the prediction less dependent on the table dimension. Whereas, the Prod dataset has more diverse table dimensions, which could better reflect the real feature importance. We collect a million samples and split 80%/10%/10% as training/validation/testing sets. We fully train a cost network with 100 epochs and report the MSE on the testing set with each feature being removed. The results are summarized below (also in Appendix J).\n\n|||\n|--- |--- |\n|Features|Testing MSE|\n|w/o dimension|13.746|\n|w/o hash size|0.307|\n|w/o pooling factor|0.635|\n|w/o table size|0.305|\n|w/o distribution features|0.437|\n|All features|**0.303**|\n\nWe observe that all the features contribute to the performance. It is possible that including more features can further improve the performance under our framework. We will investigate this possibility in our future work.\n\n**3. Lack of comparison with related works, e.g., RecShard.**\n\nTo the best of our knowledge, RecShard is the only existing work that is designed for embedding table placement. However, it is very challenging to perform a fair comparison for the following reasons.\n\n- RecShard is not open-sourced. It is quite difficult to implement every detail of RecShard to enable a fair comparison.\n- RecShard has a significantly different experimental setting. For example, they use both GPU and CPU memories to place tables. RecShard will identify the hot rows and put them in GPU memory, and put the rest of the rows in CPU memory. In contrast, we mainly focus on GPU devices. In addition, they have only measured computation time. Whereas, DreamShard aims to balance both computation and communication, which aligns better with real scenarios.\n\nOur work is orthogonal to RecShard since they focus more on multi-level memory hierarchy, while we seek to balance both computation and communication by accurately predicting the costs with neural networks and making placement decisions with RL. **We have already contacted and chatted with the authors of RecShard. We have both agreed that it is non-trivial to make such a comparison.** We have also planned to investigate the possibility of combining DreamShard with RecShard to achieve more improvements. 
Thus, we will leave this comparison to future work.\n\n\n", " We thank the reviewer for the feedback. We summarize the novelties of our work compared with the existing studies below.\n\n- **Problem novelty.** The embedding table placement problem is an important problem that has been rarely studied before (also pointed out by reviewer Lt2a). Prior work on embedding tables mainly focuses on compressing the tables to improve embedding table efficiency, which will lead to precision drops. In contrast, we explored an orthogonal direction by studying how the embedding tables should be placed, which will not impact the precision. We demonstrated that optimizing the placements can lead to significant efficiency gains on both the open-sourced DLRM dataset and our production dataset, which could motivate future exploration in this direction.\n\n- **Technical novelty.** DreamShard introduces two novel ideas to tackle the embedding table placement problem, i.e., training a generalizable cost network to predict the costs (i.e., states and rewards in the MDP) and a generalizable policy network to make decisions based on the cost network, enabling a general, effective, and efficient framework for embedding table placement. From the technical perspective, DreamShard makes the following major novel contributions in the context of two lines of prior work.\n\n - **Device placement optimization.** The existing device placement techniques can be mainly grouped into RL-based algorithms and cost modeling methods. The former optimizes the placement with trials and errors (**effective but inefficient** since it often requires a large number of trials). The latter builds a cost model to reflect the real performance and adopts offline optimization algorithms (**efficient but less effective** because the cost model could be inaccurate). Whereas, DreamShard connects the efforts of these two research lines in such a way as to jointly train a cost network and a policy network in an end-to-end fashion, being **both effective and efficient**. In the left-hand side of Figure 8, we have shown that introducing the cost network makes the training orders of magnitude more efficient than using RL alone, while being equally effective. In addition, DreamShard can generalize to unseen placement tasks thanks to the cost network, which can not be achieved by prior RL-based placement algorithms. The generalizability of DreamShard makes it **extremely efficient on unseen tasks** since we do not need to re-train the networks as in previous work. In the right-hand side of Figure 8, we have shown that DreamShard can place hundreds of tables for unseen tasks within a second.\n\n - **Reinforcement learning (RL).** While RL has achieved promising results in various domains, one limitation is that RL is often susceptible to overfitting and may fail to generalize to even a slightly different environment. The generalization of RL is an extremely challenging problem that has attracted increasing research attention recently (e.g., see [1] for a survey). Generalizing a policy for embedding table placement problems is a very challenging problem. The policy not only needs to generalize to unseen states (i.e., unseen tables) but also environments with different episode lengths (different numbers of tables) and different action spaces (different numbers of devices). As a comparison, the existing RL generalization work often only focuses on the environments with the same action space [1]. 
We have demonstrated that by introducing a cost network and applying some representation reduction techniques, the policy network in DreamShard can **successfully generalize to unseen placement tasks with unseen tables and different numbers of tables/devices with neglectable performance drop**. This can be mainly attributed to the generalizability of the cost network, which inherently encourages the generalizability of the policy network by predicting the state features for unseen tasks. We believe this insight itself is a novel contribution to the broad RL community, and the embedding table placement task could serve as a benchmark for RL generalization in future research.\n\nIn the updated paper, we have included additional experiments to show that our novel approach is highly generalizable and scalable. It shows strong performance not only on 2, 4, or 8 GPUs in a single server, but also on a 128 GPU training cluster. **Please see the response to the scalability part above for details**. In addition, our code will be open-sourced to facilitate future research, which is a valuable contribution to both device placement and RL research.\n\n[1] Kirk, Robert, et al. \"A survey of generalisation in deep reinforcement learning.\" arXiv preprint arXiv:2111.09794 (2021).\n", " Thank you for the feedback! We have run additional experiments to address your concerns about the scalability and vocab distribution changes.\n\n**1. Could the method scale to larger settings? Currently 4 gpus are being used.**\n\nWe have added 8-GPU experiments on the DLRM dataset. In addition, we have extended DreamShard to an ultra-large production recommendation model with nearly a thousand embedding tables that demand multi-terabyte memory, which is trained using a training cluster with 128 GPUs. **Please see the response to the scalability part above for details.** DreamShard shows consistent advantages over the baselines. It also leads to significant improvements in training throughput on our production recommendation models.\n\n**2. In a real world setting, the vocab distribution constantly changes (e.g. new popular items) which may affect the load balance. Could the method adapt to this dynamic setting?**\n\nTo test whether DreamShard can accommodate vocab distribution changes, in our production model, we train DreamShard on the data collected from an earlier date and apply it to the data collected one month later. We find that DreamShard shows significant improvements over the baselines even when the vocab distribution changes. **Please see the response to the scalability part above for details.** We note that we are not able to do such experiments on the DLRM dataset since it does not provide the data on different dates.", " We thank the reviewer for the comments and feedback! Please see the response to your questions below.\n\n**1. Can a table be divided and be placed on > 1 devices in this method? Are there scenarios where a table does not fit in a single device?**\n\nIt is possible that some tables can be extremely large and can not fit in the memory of a single GPU. In this case, we will split the over-sized tables (e.g., splitting them column-wise or row-wise). However, splitting tables will often introduce extra costs due to the batching of the embedding operation. For example, suppose there is a table with a dimension of 512 and an operation kernel time of 20 milliseconds (ms). If we split it in half column-wise, then each table will have a dimension of 256. 
However, the operation kernel time of each table after splitting will often be much larger than 20 / 2 = 10 ms (e.g.15 ms could be a possible value). Thus, the sum of the costs of the two tables will be larger than 20 ms, which introduces extra costs. This is because the operation can do better optimization with batching before splitting.\n\nThus, in real cases, we often do minimal splitting (or pre-sharding). That is, we only split the tables when the tables can not fit in a single device. Then we apply different placement algorithms to the pre-sharded tables. As an example, we have extended DreamShard to a production recommendation model, where pre-sharding is performed before DreamShard is applied (**please see the response to the scalability part above for more details**).\n\nTable splitting is another interesting direction that is orthogonal to our work. It is a very challenging problem that requires a tradeoff between load balance (splitting tables to smaller pieces makes it easier to achieve load balance) and minimizing total cost (performing too many splitting steps may lead to significantly more overall costs). We will investigate table splitting strategy (e.g., which tables to split, and whether to perform column-wise, row-wise, or hybrid splitting) in our future work.\n\n\n**2. Can the authors also discuss results comparing a simple greedy based approach?**\n\nGreedy-based approaches simply assign the current table to the device with the lowest cost so far. They are sub-optimal because 1) locally optimal decisions may not necessarily lead to globally optimal solutions, 2) The embedding table placement problem has lots of complicating factors, such as balancing both computation and communication, and reasoning about operation fusion. These are not considered in greedy approaches.\n\nIn contrast, DreamShard takes all the table features as inputs and makes the placement decisions in a data-driven manner with RL to optimize the reward (i.e., the total cost), which leads to better solutions than greedy approaches.\n\n**3. Is the shared feature extraction MLP and sum reduction in the cost network and policy network sharing weights?**\n\nWe used separated weights in our experiments. We will study whether sharing weights is helpful in our future work.\n\n**4. Given the placement of the tables is decided by the RL model and can vary across samples during training, how is operator fusion carried out? Is there a compiler pass after placement to determine what operations on a device can be fused?**\n\nIn each training iteration, we will reconstruct the embedding operation based on the decisions made by RL. There is no compiler pass after placement, but instead, all the tables that are assigned in a device will be fused with [FBGEMM](https://github.com/pytorch/FBGEMM/tree/main/fbgemm_gpu). That is, RL will decide which operations will be fused together.\n", " We thank all the reviewers for the constructive comments and feedback. A common question raised by the reviewers is whether DreamShard can scale to more GPUs (Reviewers Lt2a and ypFj) and whether it can generalize if vocab distribution changes (Reviewer Lt2a). We have added the following two experiments to answer this question.\n\n**1. 8 GPUs on the DLRM dataset (updated in Table 1).**\n\nWe summarize the results in the table below. DreamShard shows consistent advantages over the baselines. We have updated Table 1 with the 8-GPU results and moved half of the 4-GPU results to Appendix F. 
Note that we used V100 GPUs for the 8-GPU experiments rather than 2080-Ti GPUs used in the 4-GPU experiments since we do not have an 8 2080-Ti GPU server at hand.\n\n|||||||||\n|--- |--- |--- |--- |--- |--- |--- |--- |\n|Task|Random|Size-based|Dim-based|Lookup-based|Size-lookup-based|RNN-based|DreamShard|\n|DLRM-40 (8) train|15.6±0.4|14.1±0.0 (+10.6%)|13.4±0.1 (+16.4%)|**9.8±0.0 (+59.2%)**|9.9±0.0 (+57.6%)|16.2±0.8 (-3.7%)|**9.8±0.6 (+59.2%)**|\n|DLRM-40 (8) test|15.2±0.2|14.5±0.0 (+4.8%)|13.2±0.0 (+15.2%)|9.5±0.0 (+60.0%)|9.5±0.0 (+60.0%)|16.0±1.1 (-5.0%)|**9.4±0.5 (+61.7%)**|\n|DLRM-80 (8) train|25.0±0.2|24.0±0.0 (+4.2%)|21.7±0.0 (+15.2%)|17.1±0.0 (+46.2%)|17.5±0.0 (+42.9%)|51.4±3.9 (-51.4%)|**16.1±0.3 (+55.3%)**|\n|DLRM-80 (8) test|25.2±1.3|25.6±0.5 (-1.6%)|20.8±0.0 (+21.2%)|16.7±0.2 (+50.9%)|16.9±0.1 (+49.1%)|53.4±4.6 (-52.8%)|**16.1±0.4 (+56.5%)**|\n|DLRM-120 (8) train|34.0±0.3|32.3±0.0 (+5.3%)|29.8±0.0 (+14.1%)|24.5±0.0 (+38.8%)|25.3±0.0 (+34.4%)|58.6±2.7 (-42.0%)|**23.3±0.2 (+45.9%)**|\n|DLRM-120 (8) test|33.5±0.5|35.0±0.0 (-4.3%)|29.2±0.0 (+14.7%)|23.7±0.0 (+41.4%)|24.5±0.0 (+36.7%)|58.7±3.1 (-42.9%)|**22.8±0.2 (+46.9%)**|\n|DLRM-160 (8) train|42.8±0.3|41.6±0.0 (+2.9%)|39.0±0.0 (+9.7%)|32.0±0.0 (+33.7%)|32.7±0.0 (+30.9%)|58.3±3.5 (-26.6%)|**30.3±0.2 (+41.3%)**|\n|DLRM-160 (8) test|41.1±0.0|42.4±0.0 (-3.1%)|36.4±0.0 (+12.9%)|30.8±0.0 (+33.4%)|31.6±0.0 (+30.1%)|59.3±5.4 (-30.7%)|**29.6±0.2 (+38.9%)**|\n|DLRM-200 (8) train|51.5±1.2|48.2±0.0 (+6.8%)|48.0±0.0 (+7.3%)|38.9±0.0 (+32.4%)|39.9±0.0 (+29.1%)|68.7±2.4 (-25.0%)|**37.2±0.2 (+38.4%)**|\n|DLRM-200 (8) test|50.7±0.2|50.8±0.0 (-0.2%)|44.8±0.0 (+13.2%)|38.0±0.0 (+33.4%)|38.6±0.0 (+31.3%)|70.4±2.8 (-28.0%)|**36.4±0.3 (+39.3%)**|\n\n\n**2. 128 GPUs on the production recommendation model (updated in Appendix K).**\n\nTo further test the scalability, we extend DreamShard to an **ultra-large production recommendation model with nearly a thousand embedding tables that demand multi-terabyte memory**. Since some tables are too large and can not fit in a single GPU, we perform a pre-sharding step by splitting the large tables in half column-wise. We test all the placement algorithms using a training cluster with 128 GPUs (the RNN-based method is excluded because we find it is very unstable and can not deliver a reasonable performance). In addition to the embedding cost, we also report the overall training throughput, which includes embedding cost, dense computation, data loading, etc. To answer Reviewer ypFj’s question about vocab distribution changes, we purposely train the networks on the data from a previous date and apply the pre-trained model on the data collected one month later (e.g., we train DreamShard on the data collected in January, and apply it to the data collected in February so that the vocab distribution could have significant changes). We summarize the embedding costs and relative training throughput improvements over random placement in the table below. Note that we have only performed a single run for each of the training throughput experiments due to the difficulty of resource scheduling.\n\n||||\n|--- |--- |--- |\n|Placement Algorithm|Embedding Cost|Training throughput improvement|\n|Random|118.37|0.00%|\n|Size-based|107.63 (+10.0%)|+4.0%|\n|Dim-based|90.83 (+30.3%)|+13.9%|\n|Lookup-based|102.44 (+15.6%)|+11.9%|\n|Size-lookup-based|109.27 (+8.3%)|+12.8%|\n|DreamShard|**61.59 (+92.2%)**|**+45.3%**|\n\n\nFor the embedding cost, we observe DreamShard is **67.8% better than the strongest baseline**. 
For the training throughput, DreamShard shows **27.6% improvement over the strongest baseline**. Note that since the production model has already been optimized with many iterations, a 5% improvement of training throughput is considered significant. \n\n", " This paper studies the problem of placing embedding tables of a recommender system model in a distributed training environment. How the embedding tables are placed can have a significant impact on latency. Embedding lookup consists of 4 stages - In the forward pass, the embeddings are calculated and the dense vectors are communicated to target devices, while in the backward pass, the gradients are communicated to the device with the embedding table and the gradients are then calculated for the embedding table. The tables can easily lead to imbalances if not placed correctly and thus longer latencies during training. Thus, given a set of embedding tables and devices, identifying the best partitioning strategy is important to minimize costs. However, device placement is an NP-hard combinatorial problem. Further, operation fusion and the generalizability requirement further complicate the problem.\n\nThis paper proposes DreamShard, a Reinforcement Learning based approach for embedding table placement. DreamShard solves the problem of operation fusion and generalizability using two key ideas - (1) Learning a cost network to directly predict the cost of fused networks (2) Training a policy network that interacts with an estimated Markov decision process (MDP) without real GPU execution. Strengths \n\n1. The results of the paper are competitive with recent state of the art methods and evaluated on strong baselines\n2. The idea to capture the cost of communication and operator fusion using a learned network to generalize to unseen tables, different numbers of devices, etc. can help provide a practical solution to this problem \n\n\nWeakness -\nNone that I can think of. Questions -\n\n1. Can a table be divided and be placed on > 1 devices in this method? Are there scenarios where a table does not fit in a single device?\n2. Can the authors also discuss results comparing a simple greedy based approach? \n3. Is the shared feature extraction MLP and sum reduction in the cost network and policy network sharing weights?\n4. Given the placement of the tables is decided by the RL model and can vary across samples during training, how is operator fusion carried out? Is there a compiler pass after placement to determine what operations on a device can be fused?\n Have the authors adequately addressed the limitations and potential negative societal impact of their work? Yes", " The paper proposes an RL-based approach for embedding table placement on multiple devices (mostly GPUs), which outperforms human expert and RNN-based baselines. Strengths\n* The problem is important and rarely explored before\n* The proposed method seems reasonable to me, with lots of reasonable heuristic baselines\n* The writing is clear and the paper is easy to follow * Could the method scale to larger settings? Currently 4 gpus are being used.\n* In a real world setting, the vocab distribution constantly changes (e.g. new popular items) which may affect the load balance. Could the method adapt to this dynamic setting?\n \n \n no.", " This paper focuses on the embedding table placement problem for distributed recommender systems, where operation fusion and generalizability are important and challenging. 
For solving the problem, the paper formulates the table placement process as an MDP, and proposes an RL framework named DreamShard, which consists of a cost network and a policy network. Extensive experiments show that DreamShard performs well.\n Strengths:\n1. The paper is well-written and easy to follow.\n2. The embedding table placement problem is practical.\n\nWeaknesses:\n1. The introduction of the related work is not enough. I am not sure about the novelty of this paper. 1. Is the MDP formulation usual in the related research? I have no other specific suggestions about the limitations.", " This paper presents DreamShard for embedding table placement in recommender systems to balance the computation and communication costs. The authors formulate the table placement process as an MDP and train a cost network to estimate its states and rewards. Then they update the policy network by interacting with the estimated MDP. The results show speedup over the existing algorithms and good generalizability.\n ### Strengths:\n1. The paper is clearly written and well structured. The idea of formulating the table placement process as an MDP is interesting.\n2. This paper considers the different speedups across different table combinations and proposes a generalizable embedding table placement problem.\n3. This paper addresses the embedding table placement problem with two networks, achieving good performance in speedup and generalization. The experiments are well-designed and thorough.\n\n### Weaknesses:\n1. It would be nice to have more results on DLRM with 8GPUs in table 1.\n2. Why did the authors choose 21 table features such as hash size? Can it influence the performance of the model?\n3. Lack of comparison with related works, e.g., RecShard. Please see comments above. The authors have discussed the limitations and potential negative societal impact of their work. " ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 2, 2, 4 ]
[ "nips_2022__atSgd9Np52", "LkwejsLEBQk", "Nww6WIzpPUT", "i8xvomYq742", "6jf1tCdm-qP", "r3k6hKa0HyP", "nips_2022__atSgd9Np52", "nips_2022__atSgd9Np52", "nips_2022__atSgd9Np52", "nips_2022__atSgd9Np52", "nips_2022__atSgd9Np52" ]
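The author responses in the record above contrast DreamShard's learned cost network and RL policy with a simple greedy placement baseline that "assigns the current table to the device with the lowest cost so far." The following sketch illustrates only that greedy baseline idea; it is not the authors' implementation, and the function names, table features, and toy cost model are hypothetical placeholders introduced here for illustration.

```python
from typing import Callable, Dict, List

def greedy_placement(
    tables: List[dict],                      # hypothetical per-table feature dicts
    num_devices: int,
    cost_fn: Callable[[List[dict]], float],  # assumed estimator of a device's fused cost
) -> Dict[int, List[int]]:
    """Greedy baseline: put each table on the device whose estimated cost stays lowest.

    As argued in the responses above, this makes locally optimal choices and ignores
    the interaction of communication balance and operation fusion across future
    assignments, which is why a learned cost/policy pair can do better.
    """
    placement: Dict[int, List[int]] = {d: [] for d in range(num_devices)}
    assigned: Dict[int, List[dict]] = {d: [] for d in range(num_devices)}
    for idx, table in enumerate(tables):
        # Estimate the cost each device would have if this table were added to it.
        costs = {d: cost_fn(assigned[d] + [table]) for d in range(num_devices)}
        best = min(costs, key=costs.get)
        placement[best].append(idx)
        assigned[best].append(table)
    return placement

# Example usage with a toy cost model (sum of a single "lookup" feature per device):
if __name__ == "__main__":
    toy_tables = [{"lookup": v} for v in (5.0, 3.0, 8.0, 2.0)]
    toy_cost = lambda group: sum(t["lookup"] for t in group)
    print(greedy_placement(toy_tables, num_devices=2, cost_fn=toy_cost))
```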
nips_2022_YBsLfudKlBu
Learning Viewpoint-Agnostic Visual Representations by Recovering Tokens in 3D Space
Humans are remarkably flexible in understanding viewpoint changes due to visual cortex supporting the perception of 3D structure. In contrast, most of the computer vision models that learn visual representation from a pool of 2D images often fail to generalize over novel camera viewpoints. Recently, the vision architectures have shifted towards convolution-free architectures, visual Transformers, which operate on tokens derived from image patches. However, these Transformers do not perform explicit operations to learn viewpoint-agnostic representation for visual understanding. To this end, we propose a 3D Token Representation Layer (3DTRL) that estimates the 3D positional information of the visual tokens and leverages it for learning viewpoint-agnostic representations. The key elements of 3DTRL include a pseudo-depth estimator and a learned camera matrix to impose geometric transformations on the tokens, trained in an unsupervised fashion. These enable 3DTRL to recover the 3D positional information of the tokens from 2D patches. In practice, 3DTRL is easily plugged-in into a Transformer. Our experiments demonstrate the effectiveness of 3DTRL in many vision tasks including image classification, multi-view video alignment, and action recognition. The models with 3DTRL outperform their backbone Transformers in all the tasks with minimal added computation. Our code is available at https://github.com/elicassion/3DTRL.
Accept
This paper presents a method for transformers to upgrade the 2D image input to pseudo-3D. It proposes a neural layer that estimates per-token depth and also a camera pose (pitch yaw roll), then unproject token coordinates to 3D, encodes these coordinates into embeddings, then adds these with the existing embeddings, and proceeds with the rest of the transformer. This gives performance boosts on a variety of tasks, such as video alignment. The reviewers raised concerns regarding the depth maps inferred looking more like saliency maps, but also, the depth scale ambiguity, and the ambiguity of absolute camera pose inference. The rebuttal submitted by the authors included additional results that showed that the inferred depth maps and camera poses correlated with the correct ones. All reviewers appreciated the additional experiments contributed by the authors, and suggested them to be included to the main paper.
train
[ "aV6Xg9md4fZ", "gd5W4Q64-Pw", "cFUJ53HnJeY", "gYs7cQg8-YO", "HfCGVatnN", "qGg8ogvRCTR", "6BL7p-Zg8up", "Osj9_nN5yVI", "qRSadUSQRR_", "5vEranXF7JV", "dShtErTbYB", "5hb5_bsZums", "Y3SlDoA5icz", "CW4D54SQGou", "7GNcLa-FcpF", "gEWa_SEgOZN", "Sq_ugIhySwe", "6Z2HwyZhJM9a", "KEOZmmZyIj4", "DVSld8rAqGZ", "nrW0xbfFwZw", "ZvUcLa9Lu2Q", "iAONypffKBW", "pBeJiy5VXoR", "401wiYJa0bV" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the suggestions. We have some updates and please check the 10-page version in the updated supplementary to see our changes. \n\n> Add object class on top for Figure 8\n - We have added class labels in the updated Figure 8.\n\n> Epochs\n - We apologize for the confusion, they are from different training datasets (Figure 6: IN-1K; Figure 7: Can dataset). \n\n> Pseudo-depth map at different stages\n - We have attached pseudo-depth estimation results at epoch 10, 40, 200 and 300 of training on IN-1K, in **Section K, Figure 15** (the last page of the updated supplementary). We find that the estimation varies significantly from epoch 10 to epoch 40 (higher foreground-background correctness, less missing parts of objects), but changes only a bit from epoch 40 to epoch 200 and finally to epoch 300 (mostly scales). This observation is also coherent with our quantitative evaluation. Thus, the pseudo-depth estimation learns promptly, however the model convergence takes longer time since we are optimizing for a downstream task (eg. classification).", " Dear Authors,\n\nThanks for a reminder. I think 10 page version looks good. One suggestion is for Fig.8, please add object class on top. \n\nQuestion about Fig. 6 and Fig. 7, I see in figure 6 x-axis goes till epoch 300, but in Fig. 7 only till epoch 70. Is there a reason for this? Ideally both figures should have same x-axis. \n\nAnd since, I see 3D information is being learned pretty early on in the training process, if possible it would be good to add depth-map visualization at different training stages for a few images in the appendix. I am wondering if depth is changing at all afterwards.", " It will be great if the reviewer can please check the 10-page final version of the main paper we attached at the end of the newly updated supplementary (from **Page 13**). This version reflects the suggested paper organization by the reviewers and we would be happy to get further comments.\n\nPlease let us know if we have successfully addressed your concerns. If so, we will be very pleased if it can be reflected in your final review.", " Thank you for the detailed response and the additional experimental results, which address the majority of my concerns. Although I'm not entirely satisfied with the 3D prediction results, I acknowledge that this is a very hard task of inferring the 3D in an unsupervised way. Given the experimental results that show clear improvements in the downstream task and the additional more detailed analysis, I'm willing to increase my score to 5.", " As the reviewers suggested, we have attempted a replica of 10-page final version at the end of the Appendix (beginning from Page 13 in the newly updated supplementary) showing our reorganization. We would appreaciate any comments from you about the reorganization. ", " We thank you for the review. We tried our best to respond to the comments, including the new experiments on comparing the estimated 3D information against ground truth. It will be great if you can please check it and let us know whether we have addressed your concerns. If there is something still missing, we would be happy to add it.", " We thank the reviewer for recognizing the significance of new experiments. We totally agree that important evaluations should appear in the main paper, and we will do so in the camera-ready version by utilizing an additional content page. 
We have attempted a replica of this 10-page camera-ready version at the end of the Appendix (beginning from Page 13 in the newly updated supplementary) showing our reorganization.", " We thank the reviewer for sharing two solid papers regarding rotation representations in neural networks. As mentioned, it was not a big problem in our case as the approach probably learned to avoid putting R into nearly boundary cases. We will add the discussions in the final version of the paper.", " Thank you for your updates! I'm quite impressed by the updated experiments (especially the evaluation of depth and camera pose). That's a lot of efforts. In general, I'm convinced by additional experiments. I'm happy to raise my rating from 6 to 7.\n\nJust some suggestions: The camera pose evaluation can be done using accuracy/AUROC/median angle error. That might be more straightforward to interrupt. [1] and [2] might be inspiring in light of their evaluation of camera poses.\n\n[1] Sarlin et al. SuperGlue: Learning Feature Matching with Graph Neural Networks. CVPR 2020.\n[2] Jin et al. Planar Surface Reconstruction from Sparse Views. ICCV 2021.\n\nDepth also has scale-invariant RMSE but I'm not sure if you can get any meaningful results here since it focus a lot on details. We don't expect the implicit learning of depth gets most details correctly here.\n\nAnyway, thanks for your great work!", " Thank you for the updates and answers!\n\nThis is promising, and I would like to raise the concern that asking NNs to generate euler angles will induce wierd behavior[1][2], though not really a concern of this paper.\n\n[1] On the Continuity of Rotation Representations in Neural Networks, CVPR 2019\n[2] Eliminating topological errors in neural network rotation estimation using self-selecting ensembles, TOG 2021", " Authors have provided detailed response to all of my concerns. I saw that many reviewers raised similar concerns and these are important pieces of information for the paper to be accepted. As reviewer xNss mentioned, the common concerns across all reviewers should me moved to the main paper, I understand this is a bit of work towards the rebuttal, but this can make paper really strong. Especially, depth correlation analysis and showing improvement on different transformer backbones. \n\nThank You.", " We thank the reviewer for the suggestion on improving the paper organization. We will move suggested content from the supplementary to the main paper accordingly in the final version.", " Great, these additions answer my questions and improve the paper. It seems other reviewers shared a similar concern, about getting a better intuition of why this works, and I think the additional visualizations and analysis are a great help on this matter. \n\nThis will probably take a big effort, but: I think the main paper could be revised to do a better job of addressing concerns as they pop up in the reader's mind. Shifting a few good plots and visualizations from the supplementary into the main, while making the text more concise, would make the paper much easier to like. ", " We thank all the reviewers for the thoughtful comments. As reviewers suggested, we conducted new experiments and included more examples in the **updated supplementary material**. Here is the summary.\n1. [Section **A.1**] Quantitative evaluation on pseudo-depth estimation with ground truth depth maps. \n2. [Section **A.2**] Quantitative and qualitative evaluations on camera estimation with ground truth camera extrinsics.\n3. 
[Section **B**] Evaluation on image classification using ObjectNet, a dataset including hard, real-world image samples in different rotations and viewpoints.\n4. [Section **C**] Evaluation on 3DTRL with more Transformer architectures.\n5. [Section **D**] Comparison between naive perspective augmentation and 3DTRL.\n6. [Section **F**] Examples for pseudo-depth estimation on non-class objects.\n7. [Section **H**] Limitation discussion.\n8. [Section **M**] A large collection of pseudo-depth map examples.", " We thank the reviewer for the encouragement and the insightful comments.\n\n- > Comparison against the ground truth\n * We thank the reviewer for the suggestion. Following this, we conducted an additional experiment quantitatively evaluating the 3D information accuracy (**Section A** in the updated supplementary material). We used the dataset with the ground truth 3D information, and measured the correlation between the ground truth and our estimates. We confirm high correlation between them.\n\n- > Ambiguities in estimation \n * We agree with the reviewer that there is scale ambiguity. As our method does not utilize any supervised 3D information, instead of directly resolving the ambiguity, we optimize the entire model including 3DTRL with respect to the downstream task (e.g., object classification). The intuition is that 3DTRL has to learn to set the pseudo-depth (and the corresponding camera parameters) at the right scale, in order to make the entire task (e.g., object recognition) successful. The resulting 3D estimation will not be metric (as it is not explicitly optimized to recover the ground truth 3D) but have its own scale, and we find that this is ok as long as it is consistent across different images.\n\n- > 3D inference capability\n * We conducted an additional experiment to explicitly evaluate the correspondence between the estimated pseudo-depth and the ground truth depth (**Section A.1** in the updated supplementary material), and confirmed that they are correlated -- the relative relations between the estimated 3D positions match with those of the ground truth. This was reported in terms of Person’s r. Overall our estimation gives $r=0.7$ which shows a good correlation. We believe such 3D information is sufficient to contribute meaningfully as a positional embedding.\n\n- > Intrinsics\n * Our work focuses on the approximation. We tried different intrinsics and it did not make much difference.\n\n- > Origin\n * We set the origin to be $(0,0,0)$, and we can interpret as the camera poses with respect to this origin are implicitly learned in an unsupervised way. In **Figure 10** in the updated supplementary, we show the same object in similar object poses are predicted to have similar camera poses. We believe our estimated extrinsics are doing object-centric canonicalization of images with respect to their object poses. 
This aligns with the Reviewer **xnSs**’s insight: \"I think we expect the poses (extrinsics) are doing some canonicalization of the input imagery -- registering them closer to a common pose.\"\n\n- > Different number of 3DTRLs\n * We did not find a conclusion on how the estimation changes across multiple 3DTRLs because this kind of usage is not our main study, considering the improvement from using multiple 3DTRLs is less than the improvement we get from baseline->one 3DTRL, but the parameter and computation overhead are doubled/tripled.\n", " - > Experiment with more Transformer architectures\n * In **Section 4.5** of the submitted paper, we showed the results with TimeSformer, which invokes a different input (video) and different attention mechanism (divided space-and-time) than DeiT.\n * In order to further confirm this, following the suggestion from the reviewer, we newly conducted experiments by inserting 3DTRL with different architectures like Swin [A2] and TnT [A3]. The table below shows the results. We find that the improvement in Swin is relatively smaller compared to the other Transformers, due to the strong inductive bias (local windows) that limits the interaction among tokens. Still, the result is consistent across different architectures.\n\t| Model | CIFAR-10 | CIFAR-100 | Pouring (Kendall’s Tau) | Pick(Kendall’s Tau) |\n\t|----------------|------------------|------------------|-------------------------|---------------------|\n\t| Swin-T | 50.11 | 21.53 | 0.584 | 0.623 |\n\t| Swin-T + 3DTRL | **50.29(+0.18)** | **21.55(+0.02)** | **0.683(+0.099)** | **0.640(+0.017)** |\n\t||\n\t| TnT-S | 81.25 | 54.07 | 0.740 | 0.640 |\n\t| TnT-S + 3DTRL | **82.43(+1.18)** | **56.00(+1.93)** | **0.792(+0.052)** | **0.671(+0.031)** |\n\n- > Experiment with perspective augmentation\n * Following the suggestion, we applied perspective augmentation during the training, focusing on the multi-view video alignment task. We find that it does not help the multi-view video alignment. Instead, it harms the training since these augmentation transforms are not true viewpoint changes. For simplicity we show one metric Kendall’s tau (the higher the better) in the table below. \n\n\t| | Pouring | Pick | MC | Can | Lift |\n\t|----------------------|---------|--------|--------|-------|-------|\n\t| DeiT | 0.426 | 0.245 | -0.115 | 0.789 | 0.716 |\n\t| DeiT+Perspective Aug | 0.201 | -0.249 | -0.419 | 0.342 | 0.486 |\n\t| DeiT+3DTRL | 0.740 | 0.635 | 0.392 | 0.824 | 0.739 |\n\n\n- > Eq(2) \n * Good catch. Sorry for the confusion. (u, v) in eq (2) are center pixel coordinates, which are set to (0,0). We have fixed it in the revised version of the paper.\n\n- > Eq(3)\n * Sorry for the confusion. Both versions are mathematically correct and the difference is the definition of $d$. If $d$ is the distance between the point and the optical center, then $z=dc/\\sqrt{u^2 + v^2 + c^2}$. We missed $c$ in the nominator and $c^2$ (originally written in $c$) in the denominator, but our results are not affected since $c$ is set to 1 in our configuration. We have fixed this equation using the standard way.\n\n[A2] Liu, Ze, et al. \"Swin transformer: Hierarchical vision transformer using shifted windows.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n\n[A3] Han, Kai, et al. 
\"Transformer in transformer.\" Advances in Neural Information Processing Systems 34 (2021): 15908-15919.", " We appreciate the reviewer for the thoughtful comments and contrusctive suggestions.\n\n- > Camera estimation \n * We believe our estimated extrinsics are doing object-centric canonicalization of images with respect to their object poses. This aligns with the Reviewer **xnSs**’s insight: \"I think we expect the poses (extrinsics) are doing some canonicalization of the input imagery -- registering them closer to a common pose.\"\n Our new visualization (**Figure 10** in the updated supplementary material) also implicitly illustrates this by showing that the images with the same/similar object pose (in different environments) result in similar camera extrinsics regardless of the background.\n\n- > Limitation on ImageNet-perturbed Evaluation\n * We agree that perspective transformation on ImageNet is not exactly true viewpoint change, and it is more a proof-of-concept. Simultaneously, the paper also has experimental results on real-world (1) multi-view video alignment and (2) cross-view action recognition tasks. Our approach shows meaningful improvements in these tasks which was also mentioned by Reviewers **xnSs, FjVH, sPDR, and kBS4**.\n * Following the suggestion from the reviewers, we further tested the model performance on ObjectNet [A1], which is a very challenging real-world testset including viewpoint and other distracting changes. Our new experiments show that our method consistently and meaningfully outperforms its corresponding baseline model. Note that we are using the ImageNet-trained model without any fine-tuning for ObjectNet.\n\t| Model | ObjectNet |\n\t|--------------|-------------------|\n\t| DeiT-T | 21.30 |\n\t| DeiT-T+3DTRL | **22.37 (+1.07)** |\n\t||\n\t| DeiT-S | 25.83 |\n\t| DeiT-S+3DTRL | **27.08 (+1.25)** |\n\t||\n\t| DeiT-B | 26.98 |\n\t| DeiT-B+3DTRL | **27.34 (+0.36)** |\n\t||\n\t| Swin-T | 28.60 |\n\t| Swin-T+3DTRL | **28.95 (+0.35)** |\n\t||\n\t| Swin-S | 30.85 |\n\t| Swin-S+3DTRL | **31.26 (+0.41)** |\n\n- > Generic module\n * We would like to clarify. What we mean is that 3DTRL could be inserted within the model for different downstream tasks that would benefit from 3D, as long as we have training data for these downstream tasks. 3DTRL will be optimized for the given dataset. Our new experiments also show the potential that the learned 3DTRL could transfer from ImageNet to ObjectNet without any finetuning, and we will investigate this further in the final version of the paper. \n\n- > Focus on primary parts of the object in pseudo-depth estimation\n * In order to clarify this further, we included additional depth map visualizations in the updated supplementary material. **Figure 13** shows more examples with multiple objects in the scene (i.e., we have other objects in the scene). In these examples, we observe that foreground objects that do not correspond to the object class label also provide reasonable pseudo-depth values.\n * We also conducted an additional experiment quantitatively measuring the accuracy of the 3D depth estimation with the dataset where the ground truth 3D surfaces are known (**Figure 8** in the updated supplementary material). The result shows that our pseudo-depth estimation is highly correlated with the ground truth depth maps.\n\n[A1] Barbu, Andrei, et al. 
\"Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models.\" Advances in neural information processing systems 32 (2019).", " We thank the reviewer for the valuable comments.\n- > Depth/Camera pose evaluation\n * Following the suggestion from the reviewers, we conducted additional experiments to quantitatively evaluate the predicted 3D locations of the features as well as the camera poses. **Section A** in Appendix (in the updated supplementary material) shows the results. The accuracy of the predicted depth map in each image was evaluated by measuring the correlation to the ground truth. The resulting correlation coefficient $r$ is ~0.7, showing that we have a good mapping.\n * In evaluation on camera poses, we test the disparity between the estimation and the ground truth. We individually check the disparity of position and orientation, and results (both <0.5) show that our estimation is fairly correlated to the ground truth. \n- > Gimbal lock\n * Thanks for the great question. We agree that gimbal lock is an important issue in 3D rotations. However, in our case, the Euler-angles are predicted by neural nets so that the gimbal lock case occurs rarely compared to the real continuous controlling process.\n- > Optimization over the course of training\n * We conducted additional experiments to further investigate these. Please check **Section A** (**Figure 8** and **9**) in the updated supplementary for the performance curve.", " We thank the reviewer for the valuable comments.\n- > Scale ambiguity\n * We agree with the reviewer that there is scale ambiguity. We do not enforce any scale-invariance, but optimize the entire model including 3DTRL with respect to the downstream task (e.g., object classification). The intuition is that 3DTRL has to learn to set the pseudo-depth at the right scale, in order to make the entire task (e.g., object recognition) successful. The resulting 3D estimation will not be metric (as it is not explicitly optimized to recover the ground truth 3D) but have its own scale, and we find that this is ok as long as it is consistent across different images.\n * We conducted a new experiment evaluating the correlation between the estimated pseudo-depth and the ground truth depth (**Section A.1** in the updated supplementary material), and confirmed that they are correlated (r is ~0.7).\n\n- > Ambiguity in camera poses\n * As we described above, we believe the 3DTRL is optimized to learn camera parameters setting the pseudo-depth at the right scale to make the task successful. In the new experiment evaluating the camera estimation (**Section A.2**), we show a fair alignment (disparity < 0.5) between our camera estimation and the ground truth.\n\n- > Limitation on ImageNet-perturbed evaluation\n * We agree that the perspective transformation may not represent reality. The perturbed ImageNet is a good proof-of-concept, and we also evaluated 3DTRL on true multi-view scenarios such as video alignment and action recognition (Section 4.2, 4.4 in the main paper).\n * We also conducted one extra experiment with ObjectNet [A1], a very challenging test set including viewpoint and other distracting changes, compared to ImageNet. 
We show that our method consistently outperforms its corresponding baseline model, including the additional Swin Transformer backbone.\n\t| Model | ObjectNet |\n\t|--------------|-------------------|\n\t| DeiT-T | 21.30 |\n\t| DeiT-T+3DTRL | **22.37 (+1.07)** |\n\t||\n\t| DeiT-S | 25.83 |\n\t| DeiT-S+3DTRL | **27.08 (+1.25)** |\n\t||\n\t| DeiT-B | 26.98 |\n\t| DeiT-B+3DTRL | **27.34 (+0.36)** |\n\t||\n\t| Swin-T | 28.60 |\n\t| Swin-T+3DTRL | **28.95 (+0.35)** |\n\t||\n\t| Swin-S | 30.85 |\n\t| Swin-S+3DTRL | **31.26 (+0.41)** |\n\n- > Evaluation on the estimated cameras\n * We conducted additional experiments. Please refer to **Section A.2** in our updated supplementary, where we quantitatively and qualitatively evaluate the estimated extrinsics. We show fair disparities (<0.5) in both position and orientation measurements, indicating that our estimation has a fair correspondence to the ground truth.\n\n- > Failure cases\n * We find it hard to estimate small objects in the scene, or complex scene, due to the coarse scale (in 16x16 image patches) from the backbone Transformer. More discussion is in the Limitation section in the updated supplementary.\n\n- > Focal length c\n * We set $c=1$ for simplicity. Given by eq.(3), $d$ and $c$ are correlated so estimating $d$ and when $c$ is fixed is enough. For different datasets, we don’t change $c$, i.e. they are all set to $1$. \n\n- > Ground truth camera parameters for training\n * Our method does not use any ground truth intrinsics and extrinsics for training.\n\n\n[A1] Barbu, Andrei, et al. \"Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models.\" Advances in neural information processing systems 32 (2019).", " We thank the reviewer for the encouragement and the insightful comments.\n- > More (depth map) visualizations\n * Following the suggestion, we added more depth map examples in **Figure 14** in the updated supplementary material. We show examples of multiple object classes.\n\n- > \"I think we expect the poses (extrinsics) are doing some canonicalization of the input imagery -- registering them closer to a common pose.\"\n * We agree with the reviewer. In order to confirm this further, we conducted a qualitative analysis by taking images of the same object with different viewpoints (and different background), using our ImageNet trained model. **Figure 10** in the updated supplementary material illustrates the results. We are able to observe that the images containing similar object poses result in similar camera poses, implicitly suggesting that the estimated extrinsics canonicalise the object pose.\n\n- > 3D estimation and camera pose analysis\n * We also added quantitative evaluation regarding 3D estimation (pseudo-depth) and camera pose estimation in **Section A** in the updated supplementary. \n * As our estimations and the ground truth do not share the same coordinate system (and as our model only optimizes with respect to the entire task, eg. classification), we measure the relative mapping between estimation and ground truth.\n * We showed that the correlation between our pseudo-depth and ground truth is high (~0.7). The disparity between estimated camera poses and ground truth camera poses is fair (disparity < 0.5). 
These suggest 3DTRL is able to relatively recover 3D information.\n\n- > Question: \"When you use multiple 3DTRL locations (in the supp), do you share the depth and pose estimates across all of them?\"\n * The depth and pose estimates are not shared.", " This paper presents a method for transformers to upgrade their 2D image inputs to pseudo-3D, and shows quite convincingly that this contributes a performance boost. The basic idea is a layer that goes somewhere in the middle of the transformer: estimate per-token depth and also a camera pose (pitch yaw roll), then unproject coordinates to 3D, encode these coordinates into embeddings, then add these with the existing embeddings, and proceed with the rest of the transformer. This gives performance boosts on a variety of tasks. \n I am quite amazed by this paper. The idea is simple and intuitive, it adds negligible computation, it requires no extra supervision, and yet it improves results by a few points on a variety of tasks. The ablation studies indicate that it is indeed the 3d lifting, and not the additional parameters, that leads to the performance benefit. I think this is a significant result and worth publishing. Also the paper is easy to read. \n\nAs for weaknesses: I wish the paper had a more thorough analysis, even if qualitative, to show why exactly this is working. The depth maps of the dogs in Figure 6 are somewhat helpful, but why so few, and why just dogs? I would like to see a random sample of depth maps, to get a feel for the normal behavior. (I found a few more in the supplementary, but for me the 3D visualizations there are less clear than the depth maps.) Also, analysis of the camera poses seems to be completely missing. I may have missed it, but I'm not sure the pitch/yaw/roll estimation was shown to be helping. Maybe the model is only estimating some basic \"intrinsics\", to help with the scaling as the points go to 3D. I think we expect the poses (extrinsics) are doing some canonicalization of the input imagery -- registering them closer to a common pose. Besides adding the basic ablations, it would be great, for example, to get a dataset with pose annotations, do some self-supervised learning, and then show that the estimated camera poses have some mapping (even if weak) to the real pose distribution. \n When you use multiple 3DTRL locations (in the supp), do you share the depth and pose estimates across all of them? \n I think the authors forgot to write a limitations section, but I think they could resolve this easily. ", " The paper proposes 3DTRL, a module in ViT which helps the learning of viewpoint-agnostic representations. After ViT extracts image patches to tokens, 3DTRL estimates a depth for the whole patch, as well as camera extrinsics. Then the features are transformed into 3d tokens, and feed into the next transformer layers. Experiments are conducted on both image classification and multi-view video alignment. 3DTRL improves the performance on more viewpoints. Strengths:\n\n- It is a neat idea to add an additional module to transformer so that transformer can learn viewpoint-agnostic features. Pseudo depth estimation on patches makes a lot of sense.\n- My primary concern after reading the idea is whether pseudo depth estimation and camera pose estimation can really learn any meaningful information. Especially, the pseudo depth estimation is just 2 MLP layers. And it is addressed well in Fig 6 and supp Fig 7. 
I appreciate it!\n\n\nWeaknesses:\n\n- The pseudo-depth prediction seems to have scale ambiguity -- how does the model learn metric depth without supervision? Do you enforce scale-invariance somewhere?\n- The learned camera pose also seems to have ambiguity. Take Fig 7 as an example. It seems multiple camera trajectories are reasonable in both cases. We only care about the relative camera pose so the absolute camera pose does not matter. I'm not sure how networks learn an absolute camera pose in this case. \n- The improvement on large datasets such as ImageNet is relatively small (around 0.2%). It is probably because ImageNet is too diverse and does not benefit from viewpoint-agnostic representations. At the same time, it works well on ImageNet-perturbed. However, these perturbed images may have additional cues to learn geometric transformations based on the black boundary.\n - Is it possible to evaluate the learned camera extrinsics R and T? At least in simulators the ground truth camera pose should be available.\n- Do you notice any failure modes introduced by 3DTRL? \n- The paper writes the focal length c is set as a constant hyperparameter. What do you finally use? I guess you might use different c for different datasets since c is available for synthetic datasets? Limitations are not discussed in the paper.", " This paper proposes 3DTRL, a 3D-aware component that can be plugged into transformer-based supervised/self-supervised learners. In particular, 3DTRL utilizes the tokens generated by the previous layers, and uses a per-token MLP to predict pseudo depth values for each token. Then, an MLP-based camera pose estimator uses all the tokens to predict the Euler angles and camera translation. In the end, a 3D embedding is used to generate embeddings for each token given their predicted camera-space coordinates. The following transformer layers will therefore be 3D aware. \nTo support the validity of this proposed module, extensive experiments are conducted. The three main tasks are image classification under perspective transform, video alignment under different viewpoints, and activity classification. The proposed module only induces a slight increase in parameter counts and memory usage, but improves the final result by a significant margin. Strengths\n1. Originality:\n\nThe proposed module is novel from the reviewer's perspective. It's simple in nature, imposing camera transformations as a structural prior into the transformers. \n\n2. Quality:\n\nI find this paper of good quality. It presents all the necessary details and experiments to support the claim, and addresses the design choices well through ablation studies. \n\n3. Significance:\nI find this paper significant, and hopefully the community will as well. The idea itself is simple, yet proves to be effective. In addition to the applications shown in this paper, it may also be helpful for areas such as unsupervised depth/pose estimations, especially how they could benefit from large image/video datasets without annotations. \n\n4. Clarity:\nI find this paper easy to follow and well written.\n\n\nWeakness:\nThis is not a major issue, but I'm not convinced that the predicted depth/camera poses are meaningful in terms of explaining the image geometrically. The circular tokens in Fig. 1 do not seem to explain the image, and there's less visualization on the predicted pose. I do not find this very concerning though, since the prior is really weak as they are only structural in nature. 1. 
I simply find it interesting that imposing a structural prior is already effective with a significant margin. Does the choice of camera pose parameterization affect the results at all? Euler angles have gimbal locks and therefore induce ambiguity, which might be harmful.\n\n2. The camera pose results visualized in the supp. are quite interesting. Even though it might be a stretch, is it possible to compare how well the camera poses improved over the course of training? As argued in the paper, precise depth and camera pose are not required to make a good classification decision, but how accurate should they be to make the 3D embedding meaningful? The authors do not discuss the limitation of the method. It is important to have an up-front discussion of the limitations, and potential bells and whistles. ", " In this paper, the authors claim that deep learning-based models applied to 2D images lack 3D information that is easily understood by human vision. In turn, the authors focus on transformer-based architectures. The authors propose a 3D Token Representation Layer (3DTRL), a plug-and-play module for transformer-based architectures to learn viewpoint-agnostic representation for the visual data. Authors do this by lifting 2D token locations to 3D using a pseudo depth estimator and a pseudo camera parameter estimator. Through experiments on Image Classification datasets and multi-view video alignment datasets, authors show the effectiveness of 3DTRL modules in being robust to viewpoint changes.\n Strengths:\n\n1. Authors' motivation for a need to embed 3D information in tokens is clear\n2. Proposed 3DTRL is a plug-and-play module, that can be embedded into any transformer architecture\n3. Authors show experiments on multiple datasets to show the effectiveness of the module\n\nWeakness:\n\nWhile the authors' motivation for the need to embed 3D information into learning visual representation is clear, the proposed approach is not convincing for multiple reasons, \n1. Authors have proposed a pseudo depth estimator and pseudo camera parameter estimator to lift 2D token locations to 3D. While I can understand 2D image information being used to learn depth, in a free-view image analysis scenario, I don’t think camera parameter estimation makes sense. I.e., we don't even know the reference, and how is the R, t of one image related to the R, t of another image in CIFAR-10? In 3D multi-view datasets there are approaches that directly regress camera poses and scene coordinates ([1], [2]). However, in this case, the authors are trying to embed 3D scene information in model weights, while in free-viewpoint images that don't have a common reference frame, the proposed approach is most likely not valid.\n\n2. The transformers are effective with large amounts of data, and the minor improvement of 3DTRL on the ImageNet-1K dataset shows that transformers already become effective without 3DTRL. I understand that authors have shown a validation set on view-point perturbed data with the perspective transformation of the validation-set of ImageNet-1K, but this is not a valid viewpoint change, and we cannot trust that this will really work in multi-view scenarios.\n\n3. While I can understand this approach being suitable for multi-view video alignment, since there we are fine-tuning it for each task and 3DTRL can build an implicit representation, I cannot see this as a generic module for learning viewpoint-agnostic representation.\n\n4. Also, from the depth-map visualization (Fig. 
6), I think it is just learning to pay attention to the primary parts of the class (e.g. for each of the dog images, the eyes, ears, and noses have very low values). I cannot see depth being learned here.\n\n5. The authors mention that this is a generic plug-and-play module (L14, L48), but all the experiments are only with DeiT. More experiments with different architectures are needed to support the claim.\n\nThe notation and math are also unclear in many places:\n1. E.g. in eq (2) the authors represent (u,v) as center pixel coordinates, while in eq (3), (u,v) are token 2D locations.\n\n2. The camera model presented in eq (2) is not a standard pinhole camera model. In a standard pinhole camera, if dn is a depth, then xn = dn*un/c , yn = dn*vn/c , zn = dn (considering that the camera frame and image frame match at (0,0)) \n\nSome typos/writing mistakes\n\nL55: (u, v) are pixel coordinates: (u, v) are not any pixel coordinates, they are coordinates of the center pixel of the image.\nL104: which is beneficial for later representation learning → which is beneficial later for representation learning (or did you mean, which is beneficial for better representation learning)\nL133: raw → yaw\nFig4: 3DTPL → 3DTRL\n \n\nReferences:\n[1] Li, Xiaotian, Juha Ylioinas, and Juho Kannala. \"Full-Frame Scene Coordinate Regression for Image-Based Localization.\" Robotics: Science and Systems Conference. University of Queensland, 2018.\n\n[2] Brachmann, Eric, and Carsten Rother. \"Learning Less is More - 6D Camera Localization via 3D Surface Regression.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\n\n[3] Liu, Ze, et al. \"Swin Transformer: Hierarchical Vision Transformer using Shifted Windows.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. How did you come up with eq (3)? This is not a standard pinhole model.\nHow would 3DTRL compare with doing some perspective augmentation while training? Isn’t that also learning viewpoint-agnostic representations?\nDid you try 3DTRL with newer transformer architectures such as Swin [3]? Does 3DTRL still help with such architectures?\n The authors have not mentioned any limitations or negative societal impact of the work. The clear limitation of the 3DTRL module I see is that it is not a generic module, and it is mostly applicable to data with multiple views of the same scene (i.e. multi-view video alignment).\n", " This work presents 3DTRL, a plug-and-play module aiming to learn viewpoint-agnostic representations by incorporating 3D geometric constraints in the learning process. The module predicts both a (pseudo) depth estimate and the camera parameters such that it can project the learned tokens into 3D world coordinates. The authors have shown that the module, when used together with visual Transformers, can achieve better performance in a range of visual understanding tasks, including image classification, multi-view video alignment, and cross-view action recognition. [Strengths]\n\nI like the idea of incorporating 3D geometric information into the learning pipeline. The added module is simple to use and only incurs negligible overhead in terms of parameter count and computation time, but is seemingly effective in boosting the performance of existing Transformer architectures.\n\nThe authors have conducted extensive evaluations on a range of tasks involving images and videos. The proposed module is effective in all evaluated tasks and has consistently improved over the existing baselines. 
The authors have also shown a range of ablation studies justifying the design of the added module regarding the architecture, the type of 3D information to infer, and the way to incorporate 3D geometric constraints. These are all important lessons that can help readers gain a deeper understanding of the method.\n\n\n[Weaknesses]\n\nThe primary concern I have about this paper is that I do not fully understand how well the added module can infer the 3D information.\n\nFor example, the proposed module predicts the depth information and the camera parameters from a single camera, which will for sure have a lot of ambiguities (i.e., different combinations of depth and camera parameters could explain the same visual observation). It is unclear how the module resolves the ambiguities and how such ambiguities factor into the final performance.\n\nI'm also not convinced about the module's 3D inference capability. It is great that the authors show the 3D prediction results in Figure 8 of the supplementary materials. However, the results do not seem to be very satisfying. I know that this is a very hard task, but no matter the size of the object, the distance from the camera to the object, and the distance between the foreground and the background, the 3D prediction results all seem similar, without much difference from each other. If the predicted 3D information does not reasonably reflect the underlying 3D contents, it is a bit hard for me to justify its benefit to the downstream task performance.\n\nCorrect me if I am wrong, but the authors seem to predict only the camera extrinsic parameters and ignore the intrinsic parameters. However, the dataset will for sure contain images taken by cameras with different intrinsic parameters, which will greatly impact how the 3D points in the camera frame are projected to the 2D image plane; thus, I'm curious about how much neglecting the intrinsic parameters influences the final performance. \n\nThe authors have also ablated the model by using different numbers of 3DTRLs in the architecture and have shown improved performance when using multiple modules. It would thus be quite interesting to see how the inferred 3D information changes as the model uses different numbers of 3DTRLs.\n\nI'm also not entirely sure how the authors specify the origin of the world coordinates; or is the origin purely learned in an unconstrained and unsupervised way (i.e., whatever coordinates are produced by the neural network)?\n\nFor the reasons above, I strongly suggest the authors show results in scenarios where we have access to the ground-truth 3D information (e.g., simulation or some 3D-scanned scenes) and systematically evaluate how well the proposed module can recover the underlying 3D contents.\n\n\n[Minor]\n\nLine 222: Figure 3 should be Table 3, but Table 3 is currently a figure. There might be some inconsistencies in the LaTeX code. Please see the questions in the weaknesses section. The authors did not explicitly discuss the limitations of the proposed method, but more discussions/evaluations of the model's ability to infer 3D would be needed.\n\nI do not see any potential negative societal impact from this work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4, 4 ]
[ "gd5W4Q64-Pw", "6BL7p-Zg8up", "dShtErTbYB", "7GNcLa-FcpF", "5hb5_bsZums", "401wiYJa0bV", "dShtErTbYB", "5vEranXF7JV", "KEOZmmZyIj4", "6Z2HwyZhJM9a", "Sq_ugIhySwe", "Y3SlDoA5icz", "DVSld8rAqGZ", "nips_2022_YBsLfudKlBu", "401wiYJa0bV", "pBeJiy5VXoR", "pBeJiy5VXoR", "iAONypffKBW", "ZvUcLa9Lu2Q", "nrW0xbfFwZw", "nips_2022_YBsLfudKlBu", "nips_2022_YBsLfudKlBu", "nips_2022_YBsLfudKlBu", "nips_2022_YBsLfudKlBu", "nips_2022_YBsLfudKlBu" ]
nips_2022_fVslVNBfjd8
Does Self-supervised Learning Really Improve Reinforcement Learning from Pixels?
We investigate whether self-supervised learning (SSL) can improve online reinforcement learning (RL) from pixels. We extend the contrastive reinforcement learning framework (e.g., CURL) that jointly optimizes SSL and RL losses and conduct an extensive amount of experiments with various self-supervised losses. Our observations suggest that the existing SSL framework for RL fails to bring meaningful improvement over the baselines only taking advantage of image augmentation when the same amount of data and augmentation is used. We further perform evolutionary searches to find the optimal combination of multiple self-supervised losses for RL, but find that even such a loss combination fails to meaningfully outperform the methods that only utilize carefully designed image augmentations. After evaluating these approaches together in multiple different environments including a real-world robot environment, we confirm that no single self-supervised loss or image augmentation method can dominate all environments and that the current framework for joint optimization of SSL and RL is limited. Finally, we conduct the ablation study on multiple factors and demonstrate the properties of representations learned with different approaches.
Accept
The paper studies an important question and extends the contrastive reinforcement learning framework to jointly optimize SSL and RL losses. The paper also experiments with various self-supervised losses to empirically validate the main claim -- "the existing SSL framework for RL fails to bring meaningful improvement over the baselines only taking advantage of image augmentation when the same amount of data and augmentation is used". The paper presents a surprising result and hopefully provides an interesting platform for others to build on. The main novelty of this work is not necessarily in algorithms or systems, but rather in providing a thorough experimental evaluation of insights that are known either as 'dark knowledge' or implicit lessons aggregated from a number of previous papers. In that, it does a good job. The reviewers' opinions are split, and rightly so, given the flaws in the presentation and experimentation: the conclusions are not very rigorous, and most tested SSL methods in this paper are naively applied to RL. The rebuttal, however, has yielded a stronger manuscript, which is likely to be useful to the community. The AC strongly advises the authors to make the claims more objective, and less definitive, opinionated, or catchy/click-bait. Further, the manuscript should be revised to include the gist of the discussion in the main paper, and additional clarifications in the appendix.
train
[ "9ezX7T0esGx", "ApIV2hvd7y", "jfKlSogOIoS", "7P09AEGhZs2", "pLwh6VgOmLN", "wuF0nadNjZl", "eJ9JPd-kpXvj", "ZhQKERoTDnA", "VpE61bGVeI7", "RM7Ist8vmsV", "qkHH_xXpVT8", "_HRjKD_tEzy", "KJTFjaZxbL", "WICYiWssVoY" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for detailed comments and the revised manuscript. I believe the revised paper organization and the added experiments have improved the submission considerably. Therefore, I increased my score to recommend acceptance.", " Dear reviewer GjgB,\n\nThis is a kind reminder that we revised the paper with additional experiments as you suggested. Could you please check it out and let us know if it has addressed your concerns?\n\nThanks", " We thank you for your comment; we added the curves you suggested. Please find them at the end of the newly revised appendix in the supplementary material (Fig.33-37).\n\n- > I still can hardly catch the research insight of this paper. The empirical study is extensive, but what can it deliver to RL community? Is there any practical advice for real-world RL applications? Simple augmentation can improve performance on several research benchmarks such as DMControl-100k, but does it really indicate that SSL is not useful or \"augmentation is all you need\"? I think it's a big inference which needs more evidences.\n\nWhat we provide in this paper is the limitations and potential of the current self-supervised losses (and their combinations). We believe this paper is delivering a serious research question to the Computer Vision (and self-supervised-learning) community. We are illustrating the limitations of the existing SSL approaches and suggesting the necessity of further investigation and development for RL.\n\n1. We show that CURL (as well as many self-supervised approaches) is not the golden method that works in all environments.\n2. This is unlike the observations in computer vision where self-supervision has shown to be (almost) always beneficial in many tasks including object classification, detection, segmentation, video recognition, and so on [6, 7, 9, 11, 16, 18, 22, 23, 33, 34, 39, 42, 54, 56, 57, 63, 72]. Our observation will come as a surprise for researchers in the computer vision community and motivate them to further investigate/develop SSL for RL.\n3. We are not suggesting that the contrastive loss is always useless. As we also show with our real-robot experiments (Fig. 9, L308, and Appendix A.5), a well designed combination of self-supervised losses (ELo-SAC) could outperform augmentation-only methods (DrQ, SAC-Aug(100)) in a real environment. Although CURL itself was insufficient to do so in our environment/task, we believe this observation matches with Appendix G of CURL suggesting the potential of self-supervision. Such observations suggest that one should not overlook the importance of self-supervised losses, especially for real-world robot applications. We will clarify this further in the final version of the paper.\n4. Our detailed observations cover 16 environments from three different benchmarks and provide a solid reference for people when designing their own methods. Note that these include some of the standard environments (six envs in DMC and seven envs in Atari) popularly used in previous SSL + RL papers, which we believe will also be used in future papers.\n\n---\n\n- > \"Augmentation baselines such as RAD (basically equal to CURL w/o the auxiliary task) beat CURL? \" CURL authors have discussed this issue in Appendix G in their arxiv paper. I stick to my original perspective that augmentation baselines beat CURL only in some cases. I use the CURL official code to run the CURL and CURL w/o the auxiliary task on the hard tasks on DMControl. The results from my side show that CURL's contrastive loss is effective. 
I hope that the author can show the episode reward - step curves of the hard tasks.\n\nThe detailed comparison between CURL and RAD on harder tasks: in our first revision, we included the two hard environments the reviewer mentioned (Table 11). It will be great if the reviewer can please check and comment on it, as the results show a similar trend to the other DMC tasks. We are having difficulty finding the source (a paper or technical report?) of results the reviewer is describing. Which harder tasks do you mean? Are they other than the two the reviewer initially suggested? What is the exact setting? Could we please get access to the report containing the results? Without these, it is really difficult for us to fully investigate and address the concern. \n\nWe hope the reviewer finds these answers helpful.\n\nThank you.\n", " I appreciate the detailed response. The response addresses part of my concerns. Further concerns:\n- \"Augmentation baselines such as RAD (basically equal to CURL w/o the auxiliary task) beat CURL? \" CURL authors have discussed this issue in Appendix G in their arxiv paper. I stick to my original perspective that augmentation baselines beat CURL only in some cases. I use the CURL official code to run the CURL and CURL w/o the auxiliary task on the hard tasks on DMControl. The results from my side show that CURL's contrastive loss is effective. I hope that the author can show the episode reward - step curves of the hard tasks.\n- I still can hardly catch the research insight of this paper. The empirical study is extensive, but what can it deliver to RL community? Is there any practical advice for real-world RL applications? Simple augmentation can improve performance on several research benchmarks such as DMControl-100k, but does it really indicate that SSL is not useful or \"augmentation is all you need\"? I think it's a big inference which needs more evidences.\n\nMany thanks.", " ### Technical details\n\n- > It would be much better if the authors use off-the-shelf SSL for RL methods. For example, when the authors study BYOL or predicting future, they should consider using SPR\n\nWe would like to kindly remind the reviewer that CURL and SAC+AE are off-the-shelf SSL for RL methods. SPR[65 in the revision] was not considered because it doesn’t have an official implementation for DMControl.\n\n- > Line 235: why study 86x86 and 87x87?\n- > PSO seems to be an offline search method while RL is an online and dynamic learning process. The search results are likely to be sub-optimal.\n\nBased on these two questions we think there is a misunderstanding on our ELo-based methods. In this paper, we use evolutionary search to find the optimal combination of multiple self-supervised losses and the image augmentation that works best for them. 86x86 and 87x87 are the optimal image size before random crop for the online network and the target network, found by ELo-SAC (Table 5 before revision and Table 6 in the revision).\n\nDuring the RL, the weights of self-supervised losses do not change. As described in section 3.3, the RL process can be regarded as an objective function that maps the input combination of self-supervised losses and image augmentation to a reward score. \n\nPSO is introduced to optimize the objective function in a higher level, which does not affect the training process of each RL agent.\n%In this revision, we propose an upgraded version of ELo-SAC which has better initialization and search space. 
We encourage the reviewer to go through Section 3.3 and Appendix A.2.4 to have a better understanding of our method. We are sorry for the confusion and would like to take any further questions.\n\n- > Some experiments or methods are not insightful. For example, RotationCLS and ShuffleCLS can cause ambiguities. An input image and the rotated image (with 180 degree) both make sense in some environments, then how do the networks make the discrimination?\n\nWe agree that in rare cases, RotationCLS could cause ambiguities in some degrees due to vertical symmetry. However, in our setting, we rotate the image by 0, 90, 180, and 270 degrees and send all the rotated images to the classifier at the same time. In that case, the images rotated by 90 and 270 degrees can still provide a meaningful training signal. Take CURL as an example, there is always a chance that a negative sample in the mini-batch is quite similar to the positive samples. But CURL manages to overcome this issue, showing that the network is robust to the training samples. \nAs for the ShuffleCLS, since the image representation is conditioned on action representation, we could not see any ambiguities here.\n\nThank you.\n", " We thank the reviewer’s feedback and would like to organize our comment on the following topics.\n\n### Validity of the conclusion\n\n- > Readers can hardly learn useful information from this paper. The conclusions that the authors make vary a lot on the test environments, e.g., on DMControl and Atari.\n\nWe would like to clarify that the goal of our paper is to share our observations on SSL+RL in multiple aspects and to provide insights and references for future researchers. \nThe fact that different test environments require different properties, makes the relative performance change a lot among different methods, as we concluded, there is no golden approach that works the best in all cases. We honestly share our observations and would like to remind future researchers not to overlook the difficulties, and we think both reviewer GjgB and reviewer cRC9 successfully caught this point.\n\n- > The conclusions are not very rigorous. For example, the authors test only six environments on DMControl and make the conclusion that augmentation is much more effective than SSL, e.g., CURL is much worse than RAD in Figure 4. However, I tested CURL and RAD on several hard DMControl tasks such as Hopper-hop and Reacher-hard and found CURL is more effective than RAD.\n\nIn this revision, we test two hard DMControl tasks hopper-hop and reacher-hard as suggested by the reviewer. However, we could not reproduce the observation that the reviewer mentioned (Table 13). And our observations match our existing conclusions. We kindly ask the reviewer for the source (a paper or technical report) of his/her acclaim and more technical details, so that we could better investigate this mismatch and address the concern. \n\n- > Augmentation baseline like RAD may be better than SSL baseline such as CURL on only the six DMControl benchmark environments (in the paper), but that does not mean augmentation is always better than SSL on DMControl, especially on hard DMControl tasks. Therefore, I think the conclusion is biased.\n\nAs for the test environments, on DMControl, we followed the setting of the previous papers including CURL, RAD and DrQ, which used these six environments. \nIn the revision, we included two additional environments suggested by the reviewer. 
\nBesides that, we have 7 atari environments and 1 real world environment which makes the total number of environments come to 16 and cover three benchmarks. Considering the number of environments in the previous papers (e.g., 10 environments in Chen et al. [10]), we believe we have a sufficient number of environments to avoid biased conclusions. \n\n---\n\n### Organization of the paper\n\n- > The organization of this paper needs to be improved. Too many unimportant contents are put in the main text while some important figures/tables and analysis are in the appendix.\n\nIf the reviewer could provide more specific suggestions, we would appreciate it very much and we would follow the suggestions as much as possible, like what we did with reviewer GjgB and reviewer cRC9. \nFollowing their suggestions, in this revision, we bring ablation study on image augmentation to the main section and take the pretraining part out. We would like to hear your further comments.\n\n\n- > Besides, \"tanh\" and many augmentation tricks are presented. They are a little bit off the point of the paper \"SSL impact on RL\".\n\nWe want to point out that though these tricks are not directly related to SSL, however, they play an important role in the agent performance.\nthe inconsistent of such tricks will bias the observations and lead to unfair comparison. Reviewer GjgB and we all agree that it is important to take tanh and other factors into consideration.\n", " We appreciate your well-understanding of our paper and valuable suggestions. \n\nIn this revision, as you suggested, we include the reward-step curve for six DMControl tasks (Appendix A.4, Fig.23-32) and take the pretraining section out of the main text.\n\nRegarding your question on novelty, we would like to clarify that the goal of our paper is to share our observations on SSL+RL in multiple aspects and to provide insights and references for future researchers. \nProposing a new method is not the first priority of this paper, though we did investigate multiple self-supervised losses which were never applied to RL with image augmentation before. \n\nBesides that, we explore manually combining two losses (Appendix A.2.3) and automatically find the optimal combination of multiple self-supervised losses (Section 3.3 and Appendix A.2.4). We believe such observations can benefit the community and inspire further investigation in this direction.\n\nThank you.", " ### Answers\n\n- > It would be good to consider additional RL algorithms (e.g., DDPG, PPO, etc.)\n- > The paper considers DrQ but not DrQ-v2. Is there are reason for not including DrQ-v2 in the study? (SAC vs DDPG?)\n\n\nFinally, we didn’t include DrQv2 mainly because as you mentioned, it has a different learning method DDPG, and a set of different hyper parameters. \n\nIn this paper, we want to compare apples to apples and focus on a unified framework to deliver a fair comparison.\nAt the same time, due to the limited computation, we focused on studying SAC and rainbow, which has been mentioned in the limitation section (A.8) .\n\nAnother reason is that DrQ is already good enough to outperform other methods, showing the limitations of the existing joint SSL+RL framework.\n\nWe now use “learning framework” to refer to methods like SAC and Rainbow. 
Thanks for pointing it out.\nWe also update the limitation section (A.8) based on your suggestion.\n\nThanks,\n", " ### Newly added experiments\n\nFollowing your suggestion below.\n- > I think that it would be nice to provide a bit more insight on why that might be and/or what specific aspects may be the reason for that\n\nIn this revision, we have more ablations and new methods for pairwise learning + RL\n\nWe investigate the impact of the two image augmentations, random crop (L264, Section 4) and translate (Appendix A.3.1). We reported found patterns regarding the hyper-parameter choices that could guide hyper-parameter selection.\n\nWe then investigate the impact of the learning rate of self-supervised losses (Appendix A.3.2, Figure 10).\n\nWe further study the impact of visual encoder architecture, and this study is three-folded:\n * we first replace two conv layers in the encoder with a residual block and Table 9 shows that such replacement can slightly and consistently improve performance. (L274, Section 4, Figure 5)\n * we then add additional linear layers to the end of the encoder and the results show that the additional layers slightly compromise the performance of multiple methods (Appendix A.3.2 Figure 12).\n * Finally we discuss the number of shared layers among SSL branch and RL branch (Appendix A.3.3. Figure 14, 15).\nBesides that, we extend the discussion of tanh which is now at Appendix A.3.4 and Table 9.\nAnd we provide explanation for our observations by analyzing the learned image representation (Appendix A.6)\n\nFor your comment below:\n- > It seems that action information is not considered by the pairwise approaches but is used in the reconstruction and shuffle approaches. It would be good to try to study the impact of action information.\n\nWe propose two novel pairwise learning based SSL+RL methods, CURL-w-Action and CURL-w-Critic, that take action and value network into consideration. \n\nTo be more specific we concatenate the image representation and actor/critic output and apply contrastive learning to this joint representation (L726, Appendix A.2.2). They show marginal improvement compared to the vanilla CURL (Figure 4, Table 10).\n", " We appreciate your interest and understanding of the importance of our topic. \n\n### Revised paper organization\n\nBased on your suggestions we uploaded a revised version of the paper.\nWe took out the section on the pretraining framework and now we would like to clarify the relationship between the sections in our new organization.\n\nIn section (1) as you summarized, we start our paper from the existing single self-supervised loss + RL methods, like SAC+AE, CURL. We extend such a framework by combining it with more SOTA self-supervised losses, to be more specific, one loss for each time. Many of the losses are never tested in such a framework before, especially when image augmentation is also considered.\n\nAfter that, we further extend the framework to combine multiple self-supervised losses, which is barely mentioned by previous papers. Since many observations indicate that it is not feasible to manually tune the weights of each self-supervised loss, we propose ELo-SAC, ELo-SACv2 (newly added in revision Appendix A.2.4), and ELo-Rainbow to automatically search the best way to combine multiple losses. This corresponds to your section (2). 
In summary, (1) and (2) are highly related, and (2) should be regarded as a natural extension of (1).\n\n---\n\n### Clarification\n\nRegarding the weakness, you mentioned that:\n- > If my understanding is correct, the main findings of the experiments is that the RL + SSL techniques do not seem to outperform RL + data augmentations in tested settings. This general finding in itself is not new and was reported by DrQ and RAD.\n\nYour understanding is correct and we agree that this conclusion has been partially reported by DrQ and RAD.\n We use “partially” because in their paper they only compete with the existing methods and their goal is to prove their advantages. \n\nHowever, in our case, we thoroughly investigated multiple self-supervised losses and took image augmentation into consideration, which makes a fair comparison among multiple methods. \nTherefore our observations are more robust.", " We thank all the reviewers for their helpful suggestions and valuable insights. Based on the feedback, we revised the paper with additional experiments and a better organization.\n\nThe new experiments suggested by the reviewers include\n* Detailed ablations on multiple factors that contribute to policy learning, including image augmentation (Fig.6, Fig.11), encoder architecture (Fig.5, 13, 14, 15), and the choice of hyperparameters (Fig.12). (Please check the middle of Section 4 and the whole Appendix A.3)\n* Reward- step curve for six DMControl tasks, which can better demonstrate the difference between all tested methods (Appendix A.4, Fig.23-32).\n* Two novel pairwise learning based SSL+RL methods, CURL-w-Action and CURL-w-Critic, take policy and value network into consideration. (Appendix A.2.2)\n* A new version of evolutionary search (ELo-SACv2) with better initialization and search space. (Appendix A.2.4)\n* More results in the real world environment (end of Section 4) and the pretraining section (Appendix A.5).\n\nThe organization of this paper is also revised based on the reviewers’ suggestions: we added more ablation results to the main section and took the pretraining part out. \nWe also fixed typos and citation issues as much as possible.\n\nAgain we thank all the reviewers for their contributions and suggestions, and hopefully, our revised paper can deliver our observations more smoothly and benefit future researchers.\n\nThank you.", " The paper performs an empirical study of self-supervised learning for RL from pixels. Specifically, the study considers different SSL objectives (e.g., MoCo, BYOL, etc.), evaluates them using the RL + auxiliary SSL objective framework (like in SAC+AE and CURL), and compares them to RL with data augmentations alone (e.g., RAD and DrQ). The results suggest that auxiliary SSL objectives do not lead to clear gains in tested settings. The paper also performs experiments on using evolutionary search for combining different SSL losses and using SSL pre-training rather than auxiliary SSL objectives. Strengths:\n- The paper considers and important question and would be of interest to the community\n- The study perform extensive experiments across different settings\n- The paper report robust performance estimates \n\nWeaknesses:\n- Overall, it feels that the paper is trying to do too much by studying three related but different enough aspects and not developing any of them in sufficient depth: (1) RL with auxiliary SSL losses, (2) evolutionary search for combining SSL losses, and (3) SSL pre-training for RL. 
Most of the paper focuses on (1) and that component is the most thorough of the three (though still leaves a number of questions unanswered; please see the questions section). Sections (2) and (3), and (3) in particular, feel like they have not been explored enough in the current paper and should perhaps be studies of their own (e.g., most of the questions I asked in the context of (1) would equally apply to (3) + additional questions specific to (3), e.g., the choice of pre-training data).\n- I will focus on the weaknesses of (1). If my understanding is correct, the main findings of the experiments is that the RL + SSL techniques do not seem to outperform RL + data augmentations in tested settings. This general finding in itself is not new and was reported by DrQ and RAD. Thus, I think that it would be nice to provide a bit more insight on why that might be and/or what specific aspects may be the reason for that (see the questions section below).\n- Experiments on the impact of different data augmentations and their hyper parameters in the appendix are nice. It would be good to focus more on that and bring / discuss some of these aspects in the main text. And to try to develop that aspect a bit more (e.g., instead of just saying that data augmentations have a big impact on performance try to provide a bit more specific findings)\n\nOverall, the paper asks an important question and I like it. However, I feel that in its current form it is trying to address too many different aspects (1-3 above) and does not develop any of them in sufficient detail. I think that the paper would benefit from focusing more on RL + auxiliary objectives (e.g. including additional experiments from the questions section) and leaving the remaining ones for future studies. Questions/suggestions:\n- It would be to expand the discussion of the impact of data augmentations (please see my comment in the previous text box)\n- Similarly, it would be good to include more experiments / discussion on the impact of different hyper parameter choices (we know from prior work, e.g. DrQ-v2, that this can have a very large effect)\n- It would be good to consider the impact of the neural network architecture on performance (e.g., different vision encoders, encoders of different size, etc.)\n- It would also be good to study what is the best way to pass / process vision embeddings to policy / critic heads (e.g., like the impact of the tanh discussed on L238)\n- It would be good to consider additional RL algorithms (e.g., DDPG, PPO, etc.)\n- The paper considers DrQ but not DrQ-v2. Is there are reason for not including DrQ-v2 in the study? (SAC vs DDPG?)\n- It seems that action information is not considered by the pairwise approaches but is used in the reconstruction and shuffle approaches. It would be good to try to study the impact of action information.\n\nMinor:\n- The term \"SAC backbone\" may be a bit confusing. People typically use backbone to refer to a neural network architecture used for feature extraction (e.g., common in vision). Maybe something like (learning) \"architecture\", \"framework\", or \"meta architecture\" would be clearer in this context. Some discussion of limitations in the appendix (A.9). 
I think it would be nice to see a more detailed limitation sections perhaps focusing on potential confounding factors and/or methodology aspects that might influence the conclusions.", " This work aims at answering the question of to what extent can self-supervised learning (SSL) objectives help with online Reinforcement Learning. The authors build a consistent experimentation framework around two popular RL algorithms designed for continuous and discrete action spaces respectively. Experiments in this work are performed over a significant collection of approaches, including state-of-the-art SSL methods, data augmentation methods, and a custom combination of multiple SSL objectives found through evolutionary search. The paper presents results from multiple synthetic domains and a real robot experiment. The authors come to the conclusion that there is not a single golden SSL approach that works the best in all cases, and that the performance of SSL approaches varies across different environments. Strengths:\n- The paper is well organized and well written.\n- The authors include an extensive collection of SSL and data augmentation methods for comparison in the experiments.\n- The real robot experiment is a good addition to showcase how these methods work with the presence of real-life camera/lighting noises.\n- The work provides valuable insight into how the instance-based SSL methods from state-of-the-art vision models don't seem to be super useful in the RL domain.\n\nWeaknesses:\n- This work lacks novelty in terms of methods. The authors utilize evolutionary search, but the resulting model is still a weighted mixture of existing methods.\n- The discussion on pretraining does not seem necessary because it relies on additional data at the very beginning, as the authors have pointed out, making its setting different from the rest of the methods in comparison. It's still good as an appendix section. - The paper only shows each model's final performance. Can the authors also include some reward vs. training-step plots? Are there other observations to make from those curves, e.g. ramp-up speed? The authors did a great job making a fair comparison among different methods and not overclaiming their contribution. Some potential limitations are discussed above in the weaknesses section.", " This paper presents an empirical study on the effect of self-supervised learning (SSL) in online and pretraining RL from pixels. Through the experiments on DMControl, Atari and a real-world robot environment, the authors make the following conclusions (i) the existing SSL framework for RL fails to bring meaningful improvement compared with the baselines using data augmentation techniques, while using the same amount of data and augmentation; (ii) the combination of SSL losses for RL also do not bring much gain; and (iii) no single self-supervised learning or image augmentation method dominates all environments. Strengths:\n1. The authors made a lot of experiments. Some results are interesting.\n2. The paper is easy to follow.\n\nWeakness:\n1. The conclusions are not very rigorous. For example, the authors test only six environments on DMControl and make the conclusion that augmentation is much more effective than SSL, e.g., CURL is much worse than RAD in Figure 4. However, I tested CURL and RAD on several hard DMControl tasks such as Hopper-hop and Reacher-hard and found CURL is more effective than RAD. 
Augmentation baseline like RAD may be better than SSL baseline such as CURL on only the six DMControl benchmark environments (in the paper), but that does not mean augmentation is always better than SSL on DMControl, especially on hard DMControl tasks. Therefore, I think the conclusion is biased.\n2. Most tested SSL methods in this paper are naively applied to RL. As far as I know, SSL methods applied to RL framework often need specific design. It would be much better if the authors use off-the-shelf SSL for RL methods. For example, when the authors study BYOL or predicting future, they should consider using SPR [56].\n3. Readers can hardly learn useful information from this paper. The conclusions that the authors make vary a lot on the test environments, e.g., on DMControl and Atari.\n4. The organization of this paper needs to be improved. Too many unimportant contents are put in the main text while some important figures/tables and analysis are in the appendix.\n Questions:\n1. Line 235: why study 86x86 and 87x87?\n\nSuggestions:\n1. Some experiments or methods are not insightful. For example, RotationCLS and ShuffleCLS can cause ambiguities. An input image and the rotated image (with 180 degree) both make sense in some environments, then how do the networks make the discrimination? Besides, \"tanh\" and many augmentation tricks are presented. They are a little bit off the point of the paper \"SSL impact on RL\".\n2. PSO seems to be an offline search method while RL is an online and dynamic learning process. The search results are likely to be sub-optimal.\n3. Studying which SSL benefits which type of tasks is more interesting and insightful.\n4. Line 230: \"Random\" ->\"random\"\n5. SPR [56] has published and the authors should cite the proceeding version. The authors have addressed some limitations." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "_HRjKD_tEzy", "_HRjKD_tEzy", "7P09AEGhZs2", "wuF0nadNjZl", "WICYiWssVoY", "WICYiWssVoY", "KJTFjaZxbL", "_HRjKD_tEzy", "_HRjKD_tEzy", "_HRjKD_tEzy", "nips_2022_fVslVNBfjd8", "nips_2022_fVslVNBfjd8", "nips_2022_fVslVNBfjd8", "nips_2022_fVslVNBfjd8" ]
nips_2022_8FuITQn6rG3
CRAFT: explaining using Concepts from Recursive Activation FacTorization
Despite their considerable potential, concept-based explainability methods have received relatively little attention, and explaining what’s driving models’ decisions and where it’s located in the input is still an open problem. To tackle this, we revisit unsupervised concept extraction techniques for explaining the decisions of deep neural networks and present CRAFT – a framework to generate concept-based explanations for understanding individual predictions and the model’s high-level logic for whole classes. CRAFT takes advantage of a novel method for recursively decomposing higher-level concepts into more elementary ones, combined with a novel approach for better estimating the importance of identified concepts with Sobol indices. Furthermore, we show how implicit differentiation can be used to generate concept-wise attribution explanations for individual images. We further demonstrate through fidelity metrics that our proposed concept importance estimation technique is more faithful to the model than previous methods, and, through human psychophysic experiments, we confirm that our recursive decomposition can generate meaningful and accurate concepts. Finally, we illustrate CRAFT’s potential to enable the understanding of predictions of trained models on multiple use-cases by producing meaningful concept-based explanations.
Reject
Reviewers generally agreed that this paper is innovative (the decomposition of high-level concepts into sub-concepts in particular sets this paper apart from existing concept-based methods), and appreciated its potential practical utility for the explainability community (for example by providing localization in input space in addition to concepts, which can be used to debug model errors). All Reviewers however also agreed on the main weaknesses, i.e. the limits of the validation of the method (which is restricted to only 1 dataset and lacking rigorous quantitative metrics), and a related lack of clarity in terms of technical motivation and use cases enabled for end-users of the method. Despite some useful clarifications and improvements in the presentation of results provided in the rebuttals and the ensuing exchange, Reviewers were left unsatisfied by the responses addressing the mentioned main weaknesses, which raised renewed concerns about their seriousness. As a result, the discussion after the rebuttal period was marked by the opinion expressed by two Reviewers that the paper's weaknesses outweigh the (albeit clear) merits and that in its current version the paper is not ready for being accepted.
test
[ "ZSNIZn6ZdNM", "_r9C5lzxD-", "7DXwSz-PJ0s", "L4824TFfUi7", "iJfN4-vycaw", "hw7neznBt10", "b_wE-xOwPlU", "FIi5XI345ht", "8rPpZPtnoxI", "2QeD3CmYl7-", "RWgq8DnvFU7", "Yz0llkIbY1R", "anXUsoZmW48", "tgD-EtHK4dS" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \n**W3.**\n\nWe agree and upon acceptance, the extra page will be dedicated to the broader impact and limitations sections.\n\nRegarding the use of labels, we wish to avoid user confirmation bias - i.e. the user has unconscious expectations of what the explanation should look like for a given class before it is generated. To avoid this, the fact that we use model predictions instead of labels allows us to capture concepts that are important to the model, despite not having the correct predictions (e.g. ski suits and dirt are not necessarily part of shovel images but may come from images that the model predicts as shovels).\n\n**W1.**\n\nWe have added some additional examples of the user study to the appendix. \n\n**W2.**\n\n> I think it's at least arguable that previous methods only show \"where\" and not \"what\".\n\nThis is indeed a criticism that is often made to attribution methods and this is the reason why we developed CRAFT: to go beyond the where. Concerning the \"what\", it is true that we appeal to users’ (arguably innate) capability of finding patterns in groups of images to give a name to each concept, it still constitutes an important step forward in explaining the \"what\" when compared to what’s currently available in the literature of post-hoc XAI. \n\nWe have added a clarification regarding this in the limitation section.\n\n\nWe claim to generate human interpretable concepts and for this purpose we performed two controlled psychophysics experiments with humans (more than 70 people), from different audiences (experts, laymen). \nIn addition, non-understandable concepts would have been unable to achieve scores as high as those in the intruder experiment.\nContrary to what is said, we did not ask the participants to choose between sub-concept and concept (we agree that this would not be sufficient) but we set up the experiment of asking humans which cluster was most likely to contain an image - that image itself belonging to the cluster and a subcluster. The results show that humans find the sub-clusters more coherent / likely to contain the image. \n\nWe have clarified this point in the experimentation section\n\nThank you for this constructive discussion. We believe that, as a result, these clarifications will significantly improve the paper.\n\n", " Please, next time, mark the revised parts and/or describe them if you revise your manuscript. Otherwise, it takes a lot of effort to find the revised parts and check them for correctness. \n\nW3) Your limitations and broader impact statement, as well as the computational statement, read quite well, but the limitations statement must be added to the main text. This is one of the essential parts and cannot be put into the supplement. I really recommend shortening your paper in general or putting one of the technical details in the appendix instead of the limitations statement. However, I disagree with the broader impact statement: \"for instance, dataset's labels are never used in this method, only the model's predictions. Without claiming to remove confirmation bias, the method focuses on what the model sees rather than what we expect the model to see.\" 1) the model is indeed trained with labels. So, yes, the labels are not directly used in your method, but the predictions heavily rely on the labels seen while training. 2) Hence, the model sees what the labels (i.e. 
the labelers' opinion) told the model to see during training.\n\nW1) Although it's indeed difficult to measure concepts' understandability, you then have to adjust your claims accordingly. Can you please add more and better understandable examples of the user study (Fig.13)?\n\nW2) Still, your claim in the end that you answer \"what\" doesn't fit the presented work. It is not explained why \"what\" is addressed especially compared to the previous explainability methods. Furthermore, I think it's at least arguable that previous methods only show \"where\" and not \"what\". Though it may be true that your method improves explaining the what, the presented way does not address this in a proper way. I again recommend working over this wording to align the story.\n\n It's still problematic that this work claims to produce human-interpretable (meaningful) explanations. Especially the last concept in each figure (S3-S5) shows that it's very likely that people have to guess the actual concept and will come up with a variety of different ones. The authors also \"complain\" about user studies being very expensive (in many ways) which is true. But I don't believe there is a cheaper way around. Explainability is subjective and normative. Measuring it is best done by asking people themselves. I recommend when working on XAI to focus on precise user studies. When users prefer sub-concepts over concepts that doesn't directly imply they are more understandable or meaningful. In order to claim that, this has to be asked in the user study as well: \"Please select the more meaningful conceptual explanations\".", " Thanks for your prompt reply and for raising our score, and we’re glad you appreciated our work and ideas. We understand your concerns, and quite frankly, we do agree that concept-based XAI as a whole lacks mathematically-grounded metrics for human understandability, and the dataset diversity is not as large as it should be. In fact, attribution-based explainability started out the same way: with most evaluations being qualitative and severe confirmation bias. With time, the need for a way to compare the different methods steered researchers to propose metrics for the different desiderata – e.g. insertion and deletion for faithfulness. We do hope that if our method gets some traction and gets picked up by other research teams, this need for a metric will push more people into this topic and eventually a solution will be found.\n\nAs for the lack of diversity of datasets, we simply chose to focus on more intensively testing on the dataset that other concept-based methods used, as showing results in one on which no other method was benchmarked wouldn’t allow for a fair comparison. Of course, this was a design choice whose optimality is debatable. We do intend to test it on other datasets to verify its validity, but we are confident that if it works on natural images, it should work in other (arguably easier) contexts\n\nWe sincerely thank you for this great discussion!", " I thank the reviewers for their effort. I'm happy they found some comments useful. I'm raising my score a bit as I really appreciated your work and ideas. 
**However, I still think the current manuscript is not ready for being accepted, yet.** The main reason is that the experimental section is too limited in terms of datasets and quantitative evaluations, which is a common concern I share with other reviewers (Rev-p9rf and Rev-B8eq).\n\nPlease consider the following are just my personal opinions and I'm more than happy to be convinced otherwise.\n\n> We have adopted the benchmarks and datasets of the papers doing automatic concept discovery [1,2], which focus mainly on ImageNet.\n\nI know the papers and understand the authors point of view. However, your argument is not sound i.e., it is an example of the \"appeal to authority\" logical fallacy. The reasons why I suggested to broaden your experimental section are:\n- it's really hard to argue that a method A works based on the results on 1 dataset only as there might be intrinsic biases in the given dataset—exploited by the method A—that do not hold on other datasets\n- limited experimental sections (as in ACE and other concept papers) are actually weakening this field\n\nI personally value novelty more than SOTA results and excessive experiments. However, assessing novelty is part of the game and for top tier venues my personal expectation is an assessment on more than 1 dataset.\n\n\n> We can hardly say that our evidence is mostly qualitative\n\nI understand that the volume of your tests is huge and your human experiments is extensive. However, as you pointed out, my main concern is the lack of a metric for mathematically measure the quality of your work. Testing on a lot of samples is good, but **without a mathematically grounded metric and diverse datasets is quite difficult to draw general and convincing conclusions**. Let me be clear: I liked your idea, but your claims are quite weak without a assessment thorugh a quantitative and mathematically grounded metric. Concerning the human experiments, I agree that your analysis of the outcomes is quantitative, but my concern is that the human evaluations are still subjective and qualitative: they are extremely useful for user testing (and related claims), but they cannot replace a mathematically grounded metric.\n", " - W4) The intro motivation is a bit weak. Explainability is essential, yes. However, as far as I understood, the GDPR’s “right to explanations” does not ask for an explanation this work is proposing. The GDPR is mainly about privacy rights, which means the “right to explanations” refers to the data a model is trained with, which your work does not really target. Instead, the proposed European AI act targets the “right to explanations” in your way.\n\nThe reviewer is right, we have corrected this statement in the main paper.\n\n- Q1.1) Suppose the decision is made based on these concepts. How exactly does your method differ and compare to prototype networks? To some extent, a concept seems similar to a prototype, especially as you show multiple prototypical (parts) examples when you want to answer the “what” question.\n\nIndeed, a concept seems similar to a prototype, but its difference resides in the fact that the former is extracted from a pre-trained network that can have any architecture – as long as its activations are non-negative in our case. For a model to generate prototypes, we are forced to train a whole new model with all that it implies (hyper-parameter tuning, additional expense due to training process, we might lose some performance due to the architecture choice, etc.). 
We are not saying that concept-based XAI is better in every aspect, just that it has its benefits over prototype networks (but also, its disadvantages, of course).\n\n- Q1.2) link to anonymous GitHub does not work\n\nWe apologize for the mishap, it should be fully available now (unchanged from submission on 8 May 2022). \nNB: the full stop at the end is not part of the URL.\n\n- Q1.3) What are KKT conditions?\n\nThe Karush-Kuhn-Tucker (KKT) conditions are a set of conditions fulfilled by the optimum of some convex constrained optimization problems. They can be used to characterize and find the optimum. We need them to apply the implicit differentiation theorem, since the dual variables carry the sensibility of the problem with respect to its input. The equation (10) in appendix C is explicitly built by stacking KKT conditions. We add a precision in the appendix to make this more clear.\n\n- L1) No real discussion on limitations. Neither from a technical nor from a societal view. Also no discussion on future work. That’s a big downside of this work, especially if you motivate your work with societal reasons, i.e. “the right to explanations” for end-users. What impact could your method have?\n\nIn order to focus on design choices, this section was not included in the main paper. We agree with this constructive remark, and added a section accordingly at the beginning of the appendix.\n\nWe thank the reviewer for the remarks on the form, we have revised the paper and its quality has greatly improved.\n\nBest regards,\n", " We are pleased that you liked our paper and found it clear.\n\nRegarding your questions:\n\n- W1) Overall, the experimental evaluation is weak. Many claims are only shown conceptually, especially comparing ACE and the failure cases. That is a big drawback. Also, the user study is very sparse. It shows that sub-concepts help to a small degree in finding an intruder. Attributing this finding to more meaningful concepts is a vast step. Also, no comparison of what other methods achieve is not given.\n\nThe qualitative nature of the evaluations is a failing of concept-based XAI, as there isn’t currently a metric to mathematically measure the understandability of concepts. However, we did thoroughly evaluate CRAFT’s faithfulness/fidelity through the deletion and insertion metrics on 100000 images for 5 different seeds (for a total of half a million images) and run experiments with human subjects (73 people, with 36 ML experts and 37 laymen) where we corroborate our hypothesis of human understandability of our concepts and sub-concepts. As a basis of comparison, in ACE, they poll 30 people and run their faithfulness experiments on 1000 images. The SOTA papers on this branch of XAI mostly rely on deletion and insertion metrics, and human experiments for quantitative analysis of their results, and hence, we do so as well.\n\nRegarding our user study, we did not design the intruder test to validate whether the sub-concepts are more meaningful than the concepts, but to determine their meaningfulness as concepts themselves. To compare the meaningfulness of concepts to sub-concepts, we used the binary choice test, where we show that laymen prefer the sub-concepts 74.95% of the time, and experts, 76.1%. 
We revised the way this was described to improve clarity.\n\nAs for the comparisons to other methods, adding another test to the ones we already do would have made the polls more complex, and given that they were voluntary submissions, we believe it would have had a negative impact on the amount of final submissions. For a comparison of concepts extracted through NMF to ACE (clustering), we refer the reviewer to [1].\n\n[1] Zhang, R., Madumal, P., Miller, T., Ehinger, K. A., & Rubinstein, B. I. (2021, May). Invertible concept-based explanations for cnn models with non-negative concept activation vectors. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 13, pp. 11682-11690).\n\n- W2) The work claims that the method rather answers what than where, but this question remains unanswered despite intuitively making sense. The idea is to not only highlight where a concept is attributed in the input but also what the concept looks like. However, still, it is hard to understand what the concept is. The shovel examples left me somewhat confused. I agree that this approach targets the “what” but only to a limited extent. Here, I am missing a discussion on this claim.\n\nWe argue that the limitation of current attribution methods is that they only provide location, whereas CRAFT goes further by providing information about the what through the lens of concept-based explainability: \"what concept does the model see here\". You are right in the sense that we also suffer from the same limitation as concept-based explainability: the “what” is answered to a limited extent. We have added a discussion on the subject in the limitations section (at the beginning of the supplementary material).\n\n- W3) In general, a fair discussion on limitations like computational expense, advantages, and disadvantages, in general, are missing. How efficient is this method? How computationally expensive is this method? If it only works with ReLUs, that is a significant drawback; what could be the following steps to work with other activation functions?\n\nThis is a good point. The computational cost of CRAFT is similar to ACE and the method can be run as long as a relatively large number of inferences are possible, e.g. we can generate the concepts of a class for a resnet50 in 1.000 forward passes, and hence, less than a RISE explanation for a single image.\n\nThe non-negativity constraints on the activation function arise from the NMF as our choice of activation factorization, which drastically improves the quality of extracted concepts[1]. However, CRAFT can be applied to models with layers with other non-negative activations (sigmoid, softplus, exponential, etc.), and future works include working with lower-bounded activations. This limitation has been discussed and this design choice has been kept as: NMF requires non-negativity (so any positive activation function would be compatible), which is compatible with most common architecture including Resnets, EfficientNet inceptions, SENets (non exhaustive list).\n\nIn any case, we have added the information of the computation cost to the supplementary material, and a section where we talk about the limitations we’ve identified to the beginning of the appendix. We thank you for the remark.", " - L4) Reorganize some sections, include missing information, or rephrase a few sentences to improve flow and clarity. For instance, in the abstract the authors do not mention any knowledge gap which makes a bit unclear why CRAFT might be needed. 
On the contrary, in the introduction the authors mention four knowledge gaps (lines 31, 41, 47, and 51) which makes a bit unclear what is the main purpose of CRAFT. Making the introduction and contributions more focused would significantly improve flow and clarity and it would save the authors some space for describing methods, results, and limitations.\n\nThank you for your suggestions, we have taken them into account, and the resulting document is much improved. \n\nConcerning your first 3 questions that we should improve during the rebuttal, thank you for pointing this out to us, we have corrected the typos and reworded the corresponding sentences.\n\n- Q4.1) Some claims may require to be rephrased as in their current form may not align with actual results. For instance, I find the possibility of using CRAFT for explaining complex failure cases quite exciting, but the only experimental result is on one cherry-picked example which may not be enough to prove the efficacy of the method (Section 4.2). \n\nThe last experiment of the paper proposes to tackle the problem of understanding complex failure cases in XAI which the current methods do not address. It is true that we have only shown some examples in the paper, and we are working to add some more to the supplementary material, but we have made the code available, which is easy to use so that everyone can test it.\n\nIt is also true that there is no benchmark for this sort of application at the moment. Nevertheless, the proposal of such a dataset is a separate work that we have not addressed in this paper, but that it’s much needed in the field of concept-based XAI. \n\n- Q4.2) A third example is when authors state that “CRAFT yields more interpretable concepts than ACE” (line 74). The main issue here is that ACE and CRAFT are different under many aspects. These differences make a fair comparison a bit tricky as each difference may introduce a confounding factor in the comparison. However, the results here are only qualitative (Figure 4 and appendix) which makes the claim a bit weaker than it could have been. Moreover, the experiment design does not allow to identify which ingredient (i.e. Sobol indeces? NMF? Or $\\tau$ ?) from ACE or CRAFT is the one that is actually making the difference. \n\nWe have not designed CRAFT as an iterative improvement over ACE, but as a new method as a whole, capable of generating both global and local explanations based on concepts inside the NN that are identified without supervision. Upon reading the original ACE paper, we remark that every design choice was made to furnish 25 databases of “concept images” to then leverage TCAV. Instead, we exploit the compositional nature of images by inputting crops into the network and then searching for these concepts inside the network without supervision and with a method that allows us to identify its provenance on the input later. We leverage the results of the experiments on human subjects in [1] to say that concepts generated through NMF are more meaningful to humans than those extracted using ACE, and the deletion and insertion metrics to demonstrate that our concept importance estimation (Sobol indices) is better than TCAV.\n\n[1] Zhang, R., Madumal, P., Miller, T., Ehinger, K. A., & Rubinstein, B. I. (2021, May). Invertible concept-based explanations for cnn models with non-negative concept activation vectors. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 13, pp. 
11682-11690).\n\n- Q4.3) A fourth example is when authors state “These examples illustrate one of the weaknesses of ACE: the segmentation used can introduce biases through the baseline value used. Moreover, this segmentation does not facilitate the interpretation of the concepts” (line 276). However the evidence here is only based on a qualitative visual comparison which may not be enough to support such strong claims.\n\nWith regard to segmentation+baseline introducing a bias, we have added citations to back up the claim that the baseline value is a problem (which is a known problem in the XAI community). Regarding the second point, the reviewer is right and we have removed this sentence.\n\n Best regards,\n", " Thank you for reading our paper and taking an interest in it. We will address your concerns separately.\n\n- L1) Broaden the experimental section as it is currently based on one dataset only, thus questioning the generality of the results. I would strongly encourage the authors to broaden their experiments to include additional benchmarks commonly used in concept-based XAI papers (e.g. CUB [1], TabulaMuris [2], or CelebA [3]).\n\nWe have adopted the benchmarks and datasets of the papers doing automatic concept discovery [1,2], which focus mainly on ImageNet.\n\n[1] Ghorbani, A., Wexler, J., Zou, J. Y., & Kim, B. (2019). Towards automatic concept-based explanations. Advances in Neural Information Processing Systems, 32.\n[2] Zhang, R., Madumal, P., Miller, T., Ehinger, K. A., & Rubinstein, B. I. (2021, May). Invertible concept-based explanations for cnn models with non-negative concept activation vectors. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 13, pp. 11682-11690).\n\n- L2.1) Include a description for the limitations of their work. Including relevant limitations would significantly improve the paper as it would allow other scientists to understand how to use and improve CRAFT advancing the research field. \n\nWe have added a section on limitations and broader impact in the supplementary material which focuses on what we believe are ways to improve the method and how we hope it will impact the field of XAI. \n\n- L2.2) A possible limitation worth mentioning is that the experimental evidence is mostly qualitative (Section 4.1, 4.2, and 4.3) and/or possibly subject to confirmation or selection biases (Section 4.3).\n\nWe can hardly say that our evidence is mostly qualitative or that we are subject to confirmation or selection biases. Firstly, we have measured the faithfulness of the extracted concepts on more than 100000 images on 5 different seeds (so, half a million in total), thus verifying that concepts are indeed important to the model for predicting correctly and making sure that we are not suffering from confirmation bias. Secondly, we measure the understandability of our concepts through our human experiments on 73 people coming from different environments (ML experts and laymen), all pointing to the clarity of our extracted concepts, and the benefit of explaining using sub-concepts. This kind of experiment is quite time consuming to set up and analyze, and it’s not done in a lot of papers of the field. In ACE[1], they perform the same experiments (with 30 people vs 73, and they test deletion and insertion on 1000 images, instead of half a million). As for the other results, they are easy to reproduce with the code that we uploaded to the anonymous repository. 
\n\nWe do agree that the lack of a metric for mathematically comparing the understandability of concepts is a failing of the concept-based XAI field, and we will address it in future work.\n\n[1] Ghorbani, A., Wexler, J., Zou, J. Y., & Kim, B. (2019). Towards automatic concept-based explanations. Advances in Neural Information Processing Systems, 32.\n\n- L3) Include critical information which is currently missing for understanding and reproducing the experiments. For instance, the authors do not mention what error bars represent (e.g. Figure 6) nor how the images where selected for the human experiments i.e., they just mention that “the choice was randomized” (line 701), but they do not specify how (e.g. random selection from cherry-picked examples or from all images? From groups of image crops or from the original images?). Another issue is that it is not clear how the authors pick CRAFT hyperparameters (e.g. , line 263) and how sensitive the results are to hyperparameters’ changes.\n\nThe reviewer is right. We have added more details in section “Fidelity experiments” of the supplementary material. Error bars are obtained using multiple seeds (5) on 100,000 images each time and represent the standard deviation. Regarding the way the images were chosen for the human experiment: once the NMF is trained on random ImageNet image crops (for a given class), the crops with the highest $U_i$ (thus the most representative of the concept) were automatically chosen.\n\nAs to the choice of hyperparameters, we chose $r = 25$ to compare to ACE (which did so as well), but didn’t notice much difference when slightly changing this parameter, and the same can be said about the crop sizes. We randomly picked crops of $64 \\times 64$ pixels, but altering the size to, for instance, $50 \\times 50$ doesn’t have any real impact. We’re of course leaving out hyperparameter choices that would never work (e.g. crops of $5 \\times 5$ pixels, $r = 2$, etc.). Optimally choosing $r$ as the non-negative rank of the activations matrix is an NP-hard problem, and we leave some improvements on how we perform the factorization for future works.", " Thank you for taking an interest in our paper and for the interesting questions.\n\nWe will answer the questions sequentially.\n\n- Wa) Although the quality of a technical literature shouldn't be judged by the language quality, I still believe that there are many scopes to polish the language and take care of some typos to improve the readability of the paper.\n\nWe recognize that some typos slipped through to the final submission, that readability could have been improved, and we apologize for it. We have revised it and its quality has much improved.\n \n- Wb) I couldn't access the code related to the paper from the provided link. If by any chance the code was not ready for sharing by the submission time, it would have been better to skip that.\n\nWe apologize for the mishap, it should be fully available now (unchanged from submission on 8 May 2022). \nNB: the full stop at the end is not part of the URL.\n\n \n- L31: What do you mean by information methods?\n\nApologies, this has been fixed. We meant “Attribution methods”. \n\n- L73-74: If some figure in the supplementary section is referred to in the main paper, it is a good practice to clearly mention that. L137: I request the authors to explain the notation h_{l,k}, as l and k represent some layer and the total number of layers respectively, but it was not clear to me how they're used in a single notation. 
L163: Please write the full form when the short form is used for the first time in the paper. The full form of NNLS (i.e. Non Negative Least Square) should have been mentioned here.\n\nThank you. We relabeled figures in the supplementary information. Regarding $h_{l,k}$, it denotes the top-most part of the neural network, going from layer $l$ to the output/logits.\n\n- Other questions: a) Please explain the notations of Table 1 in detail. It is not very clear what 'n' represents here.\n\nFor the human psychophysical experiments, we separated our poll’s results into two groups: ML experts (for a total of 36 people), and laymen (37 people) – the $n$ representing the amount of people in each group. Then, we divided the table in two, one part for each task (intruder detection and binary choice between the concept and the sub-concept). In essence, our results prove (with more or less statistical significance) that users prefer subconcepts over concepts for understandability (binary choice test), and that they are capable of identifying the intruder when clusters are generated using our NMF factorization method (intruder test).\n\n- b) Here the concepts found by NMF are claimed to be human understandable. If any or some of the concepts are exceptions, I am interested to know whether these concepts are specific to a particular category?\n\nThis is a great question, which we plan to investigate in future work. Meanwhile, we refer the reviewer to the supplementary information where we show all the top most important concepts for each class tested (which are not cherry-picked but are the classes in the ImageNette subset), and seem to be understandable by humans according to our human experiments.\n\n- c) As the gradient based explanation generation techniques generally produce noisy saliency maps, did you see any such phenomenon for concepts? Do the heatmaps for all the images are as nice as shown in fig.1? If yes, which part of the method (related to concept attribution map generation) helped you to achieve that specifically?\n\nThank you for this excellent question. As a matter of fact, we did obtain noisy explanations at first when we got saliency maps to work. However, we then switched to GradCAM and started getting the smooth explanations that we showcase in the paper. It would be interesting to see if this problem persists for other attribution methods when the chosen layer is an early layer in the NN.\nWe have also added a limitations and broader impact section to the beginning of the supplementary material.\n\nBest regards,\n", " We are pleased that you found our paper interesting and liked the way it’s organized.\n\nWe will address each question separately.\n\n- W1.1) How to verify the faithfulness of the discovered concepts? \n\nWe report improvements using the two main faithfulness measures used in the field: Deletion and insertion. A lower deletion indicates better fidelity, as does a higher insertion. As shown in Fig.6 and Fig.14, we improve the faithfulness of the discovered concepts by using Sobol indices with respect to those extracted with TCAV importance. \n \n- W1.2) What if the human-understandable concepts are not the correct reasoning logic for the original neural networks.\n\nWe make sure that concepts follow the model’s logic by testing their faithfulness through the Deletion and Insertion scores. 
If the concepts for a given class were not to explain the model, it would instantly reflect negatively in these metrics.\n \n- W2) How to evaluate the scalability to larger classes?\n\nWe assume the reviewer refers to the size of the images. The proposed method does not require more forward passes than SOTA explainability methods such as RISE hence scalability is not really an issue here. Furthermore, our experiments on ImageNet were performed on a single GPU (V100) for a ResNet50. \n \n- W3) Are there quantitative results to show the performance and compare it with other baselines?\n\nFor the Sobol ingredient, we refer the reviewer to Fig.6 and Fig.14 demonstrating that the proposed approach yields a much better fidelity score than SOTA method TCAV (ACE) irrespective of the factorisation method used (PCA, NMF, RCA, ICA). For the quality of the extracted concepts, we leverage the extensive human experiments in [1] that show the improvements in terms of human understandability over clustering (as done in ACE), and our own to show the preference of users for sub-concepts over concepts. Unfortunately, there is no existing mathematical metric to measure this objectively, and thus, to compare quantitatively to other concept extraction techniques directly. We added a discussion on this topic on the limitations section (at the beginning of the appendix).\n [1] Zhang, R., Madumal, P., Miller, T., Ehinger, K. A., & Rubinstein, B. I. (2021, May). Invertible concept-based explanations for cnn models with non-negative concept activation vectors. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 13, pp. 11682-11690).\n\n- Q1) How could we use the explanation to explain the error cases made by the model?\n\nWe refer the reviewer to section 4.2. We are working to add more examples of this use-case to the supplementary material.\n\n- Q2) It would be good to compare with a related paper: \"A Peek Into the Reasoning of Neural Networks: Interpreting with Structural Visual Concepts\" which also provides both global and local concept-level explanations.\n\nWe weren’t familiar with this paper at the time of writing, but we found it quite interesting, especially its capacity to generate contrastive explanations (‘why not’). However, their method proposes to learn a model on top of the extracted concepts, while ours integrates everything seamlessly into the concept extraction block – i.e. building a graph to link concept vs backpropagating through the concept extraction to locate them. We think they might work in complement, for instance by extracting concepts using the NMF and then learning the graph with them instead of ACE, thus having the best of both worlds.\n\nBest regards,\n", " This paper proposed a concept-based explainability method, CRAFT, to generate both global and local concept-based explanations.\nThis paper introduces some key contributions: (1) recursively decomposing higher-level concepts into more elementary ones.\n(2) concept importance estimation based on Sobol's indices. (3) Concept Attribution Maps, which connect global concepts to local image level explanation. **Strengths**\n\n(1) Concept-level explanation is a challenge and an important direction. 
The proposed method is useful and novel, especially for the recursive decomposition to get different-level concepts and the linking between global high-level concepts and local attribution maps.\n\n(2) The three ingredients based on the concept activation factorization are promising and inspiring.\n\n(3) The paper is well organized and easy to follow.\n\n**Weaknesses**\n\n(1) How to verify the faithfulness of the discovered concepts? What if the human-understandable concepts are not the correct reasoning logic for the original neural networks.\n\n(2) How to evaluate the scalability to larger classes? \n\n(3) Are there quantitative results to show the performance and compare it with other baselines?\n\n==========================Post Rebuttal=============================\n\nThanks for your feedback. I will keep my original score.\n (1) How could we use the explanation to explain the error cases made by the model?\n\n(2) It would be good to compare with a related paper: \"A Peek Into the Reasoning of Neural Networks: Interpreting with Structural Visual Concepts\" which also provides both global and local concept-level explanations. More quantitative experiments to show the performance and compare it with other baselines would be better.\n", " The authors have done a nice job by improving the concept-based explanation generation techniques from different perspectives. An automatic concept extraction followed by a recursive procedure towards achieving a set of concepts and sub-concepts is proposed in this work. A concept-based heatmap generation method is also proposed here that backpropagates concept values to pixel space, which is very similar to widely used class-based saliency methods. Strengths: \na) Several important directions are worked on here, that start with concept generation in a recursive manner from multiple layers of the network. This technique helps the cases where concepts generated from a very deep layer don't make much sense and decomposing them with sub-concepts can develop an improved understanding of the model. \n\nb) This is the first method, I believe, to work on saliency-based explanations for each concept. The authors have shown the effectiveness of this approach by explaining different concepts in a single image. \n\n\nWeaknesses: \na) Although the quality of a technical literature shouldn't be judged by the language quality, I still believe that there are many scopes to polish the language and take care of some typos to improve the readability of the paper.\n\nb) I couldn't access the code related to the paper from the provided link. If by any chance the code was not ready for sharing by the submission time, it would have been better to skip that. L31: What do you mean by information methods?\n\nL73-74: If some figure in the supplementary section is referred to in the main paper, it is a good practice to clearly mention that.\n\nL137: I request the authors to explain the notation h_{l,k}, as l and k represent some layer and the total number of layers respectively, but it was not clear to me how they're used in a single notation.\n\nL163: Please write the full form when the short form is used for the first time in the paper. The full form of NNLS (i.e. Non Negative Least Square) should have been mentioned here.\n\nOther questions: \na) Please explain the notations of Table 1 in detail. It is not very clear what 'n' represents here.\n\nb) Here the concepts found by NMF are claimed to be human understandable. 
If any or some of the concepts are exceptions, I am interested to know whether these concepts are specific to a particular category?\n\nc) As the gradient based explanation generation techniques generally produce noisy saliency maps, did you see any such phenomenon for concepts? Do the heatmaps for all the images are as nice as shown in fig.1? If yes, which part of the method (related to concept attribution map generation) helped you to achieve that specifically? The authors did not add any section to address the limitations and potential negative societal impact of this work.", " In this paper the authors propose the “Concept Recursive Activation FacTorization” (CRAFT), a framework to generate concept-based explanations from trained networks. CRAFT aims at addressing current limitations in state-of-the-art concept-based techniques (e.g. ACE [1]) including: (i) the association of *one concept only* for each image segment/patch, (ii) the instability/noise of gradients w.r.t. concept vectors, (iii) the localization of concepts in the input space to provide local explanations. To address these limitations, the authors equip CRAFT with three methods: (i) a recursive procedure to automatically decompose concepts into sub-concepts, (ii) an alternative scoring function for concept importance based on Sobol indeces, and (iii) a method to localize concepts in the input (pixel) space. Following Zhang et al. [2], CRAFT identifies coherent bases of concepts using the Non-negative Matrix Factorization (NMF) rather than k-Means as in ACE [1]. In the experiments the authors show that CRAFT can: (i) identify meaningful and coherent concepts, (ii) localize concepts in the input space allowing users to explain misclassified images, (iii) decompose high-level concepts into coherent subconcepts, and (iv) provide a more robust ranking concept w.r.t. TCAV in terms of fidelity scores.\n\n[1] Ghorbani, Amirata, et al. \"Towards automatic concept-based explanations.\" Advances in Neural Information Processing Systems 32 (2019).\n\n[2] Zhang, Ruihan, et al. \"Invertible concept-based explanations for cnn models with non-negative concept activation vectors.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 13. 2021. # Strengths\nThe automatic decomposition of high-level concepts into sub-concepts is the most interesting contribution of this paper w.r.t. existing concept-based approaches. This is a significant step forward w.r.t. state-of-the-art approaches such as ACE as it enables the identification of coherent hierarchies of concepts. In particular concept hierarchies may facilitate human experts in navigating the concept space. Even though the methods are not new (e.g. NMF, or Sobol indeces), they are effectively used in new contexts. In addition, the experiments show how CRAFT can be used in practice as a powerful tool to “debug” misclassified samples by localize concepts in the input space.\n\n# Limitations\nWhile this paper introduces a few interesting steps forward, the authors may consider a few suggestions for improving their contributions. In particular the authors may consider to:\n1. Broaden the experimental section as it is currently based on one dataset only, thus questioning the generality of the results. I would strongly encourage the authors to broaden their experiments to include additional benchmarks commonly used in concept-based XAI papers (e.g. CUB [1], TabulaMuris [2], or CelebA [3]).\n2. Include a description for the limitations of their work. 
Including relevant limitations would significantly improve the paper as it would allow other scientists to understand how to use and improve CRAFT advancing the research field. A possible limitation worth mentioning is that the experimental evidence is mostly qualitative (Section 4.1, 4.2, and 4.3) and/or possibly subject to confirmation or selection biases (Section 4.3).\n3. Include critical information which is currently missing for understanding and reproducing the experiments. For instance, the authors do not mention what error bars represent (e.g. Figure 6) nor how the images where selected for the human experiments i.e., they just mention that “the choice was randomized” (line 701), but they do not specify how (e.g. random selection from cherry-picked examples or from all images? From groups of image crops or from the original images?). Another issue is that it is not clear how the authors pick CRAFT hyperparameters (e.g. $\\tau$, line 263) and how sensitive the results are to hyperparameters’ changes.\n4. Reorganize some sections, include missing information, or rephrase a few sentences to improve flow and clarity. For instance, in the abstract the authors do not mention any knowledge gap which makes a bit unclear why CRAFT might be needed. On the contrary, in the introduction the authors mention four knowledge gaps (lines 31, 41, 47, and 51) which makes a bit unclear what is the *main* purpose of CRAFT. Making the introduction and contributions more focused would significantly improve flow and clarity and it would save the authors some space for describing methods, results, and limitations.\n\n[1] Wah, Catherine, et al. \"The caltech-ucsd birds-200-2011 dataset.\" (2011).\n\n[2] Cao, Kaidi, Maria Brbic, and Jure Leskovec. \"Concept learners for few-shot learning.\" arXiv preprint arXiv:2007.07375 (2020).\n\n[3] Liu, Ziwei, et al. \"Large-scale celebfaces attributes (celeba) dataset.\" Retrieved August 15.2018 (2018): 11. Addressing the main limitations mentioned above would significantly improve the paper and, as a consequence, reviewers’ feedback and scores. However, I understand that the authors may not have enough time to address them all in the rebuttal. For this reason I list here a few minor items which can give the authors a better chance to improve their work in the short term:\n1. A few sentences may need further clarification as they may contain typos or have grammar issues (e.g. ““The second concept elucidated by craft is located on the astronaut’s pants that **he confuse** with the ski suits” (Figure 5)). The authors may also improve the flow using the active voice more frequently (e.g. “Namely, in [38] and [39], explanations are generated as the inputs” (line 97)). Other sentences which may benefit from rephrasing include: “Once the factorization done, and the reconstruction of the activation of the image can then be interpreted as a combination of a set of concepts and a coefficient associated to these concepts.” (line 116), “A common concern with concept extraction methods is that make sense to humans does not mean it” (line 189), “Finally, we can capture the importance that a concept might have as a main effect – by itself – as well 210 as by interacting with other concepts on the model’s output by calculating the expected variance that 211 would remain if all the indices of the masks except the Mi were to be fixed.” (line 209). \n2. A few acronyms need to be spelled out (e.g. TCAV, line 112).\n3. The authors reference Figure 3 before Figure 2. 
I would suggest to swap the figures.\n4. Some claims may require to be rephrased as in their current form may not align with actual results. For instance, I find the possibility of using CRAFT for explaining complex failure cases quite exciting, but the only experimental result is on one cherry-picked example which may not be enough to prove the efficacy of the method (Section 4.2). Another example is when authors state “We use our approach and combine local and global explanation, to *accurately* explain predictions” (line 72). Here I was expecting a *quantitative* result in the experimental section showing the actual *accuracy* scores of CRAFT’s explanations. A third example is when authors state that “CRAFT yields more interpretable concepts than ACE” (line 74). The main issue here is that ACE and CRAFT are different under many aspects. These differences make a fair comparison a bit tricky as each difference may introduce a confounding factor in the comparison. However, the results here are only qualitative (Figure 4 and appendix) which makes the claim a bit weaker than it could have been. Moreover, the experiment design does not allow to identify which ingredient (i.e. Sobol indeces? NMF? or $\\tau$?) from ACE or CRAFT is the one that is actually making the difference. A fourth example is when authors state “These examples illustrate one of the weaknesses of ACE: the segmentation used can introduce biases through the baseline value used. Moreover, this segmentation does not facilitate the interpretation of the concepts” (line 276). However the evidence here is only based on a qualitative visual comparison which may not be enough to support such strong claims. As mentioned before, the authors may consider to include a description for the limitations of their work. Including relevant limitations would significantly improve the paper as it would allow other scientists to understand how to use and improve CRAFT advancing the research field. A possible limitation worth mentioning is that the experimental evidence is mostly qualitative (Section 4.1, 4.2, and 4.3) and/or possibly subject to confirmation or selection biases (Section 4.3).", " This work presents a novel method to provide local and global explanations for the decision process of neural convolutional models. Their approach tries to divide agglomerated concepts from later layers into disentangled sub-concepts in earlier layers to provide more human-comprehensible explanations/ concepts. For this purpose, they make use of three “ingredients”, namely 1) a reiteration to find early-stage concepts that are human-comprehensible, 2) an approach from the variance-based sensitivity analysis, the Sobol indices, to estimate concept importance in the input for the prediction and 3) a method to provide an attribution map through implicit differentiation for the found concepts such that we know where the concept is contained in the input. Experiments, including some ablations studies and a user study, are provided to prove the function of the proposed method. **Strengths**:\n- S1 Using the NMF to find promising concepts is a good idea, especially to only find sparse and meaningful concepts. Using Sobol indices for feature importance seems sensible. Also, combining the three ingredients to solve this task is exciting and overall yields a novel approach to explainability. 
This work presents a multitude of solid mathematical solutions for practical problems in this task.\n- S2 The figures used to conceptually show the function of their method is very good and intuitive.\n- S3 The story is in general clear.\n\n**Weaknesses**:\n- W1 Overall, the experimental evaluation is weak. Many claims are only shown conceptually, especially comparing ACE and the failure cases. That is a big drawback. Also, the user study is very sparse. It shows that sub-concepts help to a small degree in finding an intruder. Attributing this finding to more meaningful concepts is a vast step. Also, no comparison of what other methods achieve is not given.\n- W2 The work claims that the method rather answers what than where, but this question remains unanswered despite intuitively making sense. The idea is to not only highlight where a concept is attributed in the input but also what the concept looks like. However, still, it is hard to understand what the concept is. The shovel examples left me somewhat confused. I agree that this approach targets the “what” but only to a limited extent. Here, I am missing a discussion on this claim.\n- W3 In general, a fair discussion on limitations like computational expense, advantages, and disadvantages, in general, are missing. How efficient is this method? How computationally expensive is this method? If it only works with ReLUs, that is a significant drawback; what could be the following steps to work with other activation functions?\n- W4 The intro motivation is a bit weak. Explainability is essential, yes. However, as far as I understood, the GDPR’s “right to explanations” does not ask for an explanation this work is proposing. The GDPR is mainly about privacy rights, which means the “right to explanations” refers to the data a model is trained with, which your work does not really target. Instead, the proposed European AI act targets the “right to explanations” in your way.\n\nBased on the exciting idea I tend to accept this paper. However, there are concerns about the evaluation. If the authors address these concerns, e.g. on the missing limitation discussion, I am happy to change my score. Suppose the decision is made based on these concepts. How exactly does your method differ and compare to prototype networks? To some extent, a concept seems similar to a prototype, especially as you show multiple prototypical (parts) examples when you want to answer the “what” question. \n\n\nFurther remarks not affecting the evaluation:\n\n- link to anonymous GitHub does not work\n- Line 64 meanignful -> meaningful\n- Line 105 vectorin -> vector in\n- In general, quite some typos, capitalization, and punctuation errors\n- Please incorporate fig 7 in your main text if you refer to it multiple times\n- The appendix is a bit messy; many typos and references to figures are missing, etc.\n- Line 245 What are KKT conditions? No real discussion on limitations. Neither from a technical nor from a societal view. Also no discussion on future work. That’s a big downside of this work, especially if you motivate your work with societal reasons, i.e. “the right to explanations” for end-users. What impact could your method have?" ]
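The NMF-based concept extraction that the responses above keep referring to (activations of random image crops factorized into $r$ concepts, with the $U_i$ coefficients used to pick the most representative crops) can be summarized in a few lines. The following is an illustrative scikit-learn sketch under our own assumptions (a feature extractor truncated at some layer, $r = 25$ as mentioned in the rebuttal); it is not the authors' released code:

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_concepts(activations: np.ndarray, r: int = 25):
    """Factorize non-negative crop activations A (n_crops x n_channels) as A ~ U @ W.

    Rows of W are the concept directions; U[i, k] says how strongly concept k is
    present in crop i, so the crops with the largest U[:, k] illustrate concept k
    (this is how the crops shown to human subjects were reportedly selected).
    """
    assert (activations >= 0).all(), "NMF needs non-negative activations (e.g. post-ReLU)"
    model = NMF(n_components=r, init="nndsvda", max_iter=500)
    U = model.fit_transform(activations)   # (n_crops, r) concept coefficients
    W = model.components_                  # (r, n_channels) concept basis
    return U, W
```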
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "_r9C5lzxD-", "iJfN4-vycaw", "L4824TFfUi7", "FIi5XI345ht", "hw7neznBt10", "tgD-EtHK4dS", "FIi5XI345ht", "anXUsoZmW48", "Yz0llkIbY1R", "RWgq8DnvFU7", "nips_2022_8FuITQn6rG3", "nips_2022_8FuITQn6rG3", "nips_2022_8FuITQn6rG3", "nips_2022_8FuITQn6rG3" ]
nips_2022_MAMOi89bOL
Masked Autoencoders that Listen
This paper studies a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks, outperforming other recent models that use external supervised pre-training. Our code and models are available at https://github.com/facebookresearch/AudioMAE.
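The core efficiency trick described in the abstract, encoding only the non-masked spectrogram tokens, can be illustrated with a short PyTorch-style sketch. This is our own minimal illustration of the standard MAE random-masking step applied to spectrogram patch embeddings; the tensor shapes and the 80% ratio are assumptions for illustration, not the authors' released code:

```python
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.8):
    """Keep a random subset of spectrogram-patch tokens.

    tokens: (batch, num_patches, embed_dim) patch embeddings of a spectrogram.
    Returns the visible tokens, a binary mask (1 = masked), and the permutation
    needed to restore the original order before decoding.
    """
    B, N, D = tokens.shape
    len_keep = int(N * (1.0 - mask_ratio))

    noise = torch.rand(B, N, device=tokens.device)     # uniform noise per token
    ids_shuffle = torch.argsort(noise, dim=1)          # random permutation
    ids_restore = torch.argsort(ids_shuffle, dim=1)    # inverse permutation

    ids_keep = ids_shuffle[:, :len_keep]               # indices of visible tokens
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    mask = torch.ones(B, N, device=tokens.device)      # 1 = masked, 0 = visible
    mask[:, :len_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)          # back to original order
    return visible, mask, ids_restore

# Only `visible` (roughly 20% of the tokens) is fed through the encoder; the decoder
# later re-inserts learnable mask tokens at the positions given by `ids_restore`.
```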
Accept
The paper has two strong accepts and two borderline reject reviews. However, as one of the reviewers did not engage with the authors post-rebuttal, I had to interpret the authors' response to the reviewer's concerns, and they seem to properly address them (even including a new experiment in the paper). The work seems to have been executed concurrently with other similar approaches, and while not entirely novel, the paper seems to include in-depth experiments and a discussion that can be beneficial to the research community.
train
[ "h8-EEjr5i2E", "KO3PBegHYN", "uC8u4oPVkyA", "gPWLqOJjIpe", "wOvG1qAOXGk", "NL8aEppZrF0", "V7uc8lj2cMs", "I3ou_oa1n6D", "6uwqr9kq9Yv", "E2g69DcD36z", "vr0DBQhJoK_", "gladqu-Tt-F", "flF80iOB5PD", "FNNiH84ZhJ", "EVxEmlFp7x" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the response! We are glad that most of your concerns have been properly addressed. We will experiment and include the speaker verification experiment following the suggested protocol in VoxSRC.", " Thanks a lot for the explanation! Yes so far I have no problem with the comments from the authors except one thing: in terms of VoxCeleb, it would be good to follow the standard VoxSRC protocol from recent works on speaker verification. Speaker identification in my personal honest opinion is not a good metric for VoxCeleb due to its trial design.", " Dear reviewer, could you please address the rebuttal by the authors? The reviewer-AC discussion period starts tomorrow and comments will be closed to authors so they will not be able to address additional comments.\n", " Dear reviewer, could you please address the rebuttal by the authors? The reviewer-AC discussion period starts tomorrow and comments will be closed to authors so they will not be able to address additional comments.\n", " Dear reviewer, could you please address the rebuttal by the authors? The reviewer-AC discussion period starts tomorrow and comments will be closed to authors so they will not be able to address additional comments.\n", " Dear reviewer, could you please address the rebuttal by the authors? The reviewer-AC discussion period starts tomorrow and comments will be closed to authors so they will not be able to address additional comments.", " Thank you for reviewing our work. We will extend related work with the additional page in the final version. For your questions:\n\n**Q1** “Conceptually simple\" approach?\n\nPlease note that conceptually simple works such as seq2seq [r3] have a long history of acceptance at top venues such as NeurIPS. We hope that simplicity is a strength, facilitating wide adoption within the community. Our Audio-MAE is a simple-yet-effective approach for self-supervised learning from audio. Further, instead of naively consuming spectrograms, our model also addresses the unique properties of sound which leads to improved performance. More broadly, we think that novelty can include technical innovation (e.g. our shifted local-attention decoders), as well as novel insights, ablations, and qualitative observations (e.g. our audible results). We are thankful that many of these properties are recognized by the reviewers. \n\n\n**Q2** \"Unstructured\" masking over rigid spectrogram patches?\n\nThis is a great question. For unstructured/structured masking, we mean applying Bernoulli dropout/masking to the transformer token-level sequence (unstructured) or explicitly masking tokens along time/frequency axes (structured). The basic unit for masking is one token, that corresponds to the embedding of a 16x16 ‘pixel’ patch in the input spectrogram. We will revise this in l.106 to make it more clear.\n\nFollowing MAE, we use 16x16 patches as the smallest units and operate over their embeddings. This 16x16 patch setup is adopted by all the other transformer-based baselines in Table 2 as well (e.g. AST[10], SS-AST[18], MBT[11]). Compared to 16x16 patches, the major concern of using 1x1 “pixels” is the computation cost. Even with 80% masking, the input sequence length $N$ to the Transformer will be 102 with 16x16 patches and 26214 with 1x1 “pixels”, where the latter is too expensive (complexity is proportional to $O(N^2))$.\n\nWe investigated various patch size configurations in Table 1a. 
Compared to 16x16 size (row1 47.3 mAP on AS-2M), finer and overlapping patches with 10x10 stride resulted in ~2.7x increased computation cost but results (row2 47.3) are similar.\n\nFollowing the reviewers’ suggestion, we additionally conduct an experiment with more fine-grained 8x8 patches, which increases the computational cost by ~3.6x, to 175G FLOPs, and is the finest patch encoding we can fit into V100 GPU memory. The result for our Audio-MAE converged to 47.3 and we will include this in Table1a of the main paper. We thank the reviewer for this suggestion.\n\n[r3] I. Sutskever et al. “Sequence to sequence learning with neural networks.” NIPS 2014.", " **Q3** Local correlation in image/sound?\n\nWe are grateful for reviewer NJ9B’s insight. We are aware that local correlation is also important for images when considering at the patch/sub-patch level (e.g. textures, continuity across patch boundaries). As we fully agree with reviewer NJ9B’s perspective that images can be very local; here is why we thought images could also be less local at a different level. We were advocating this in the paper and want to clarify why we think so. We observe that image patch distribution/correlation at the broader/extended level (e.g. whole image) plays a less significant role for determining image semantics (e.g. the size/position of a bird in an image may not be important; a \"sky patch\" could be anywhere on the upper part of an image). \n\nWe consider local correlations are also important in spectrograms. At the contiguous/sub-patch level, human hearing perceptions are sensitive to defect/discontinuity in spectrograms, especially in 3-5KHz [60]. At the broader/extended level, a sound's pattern, patch distribution, and exact position in the spectrogram directly affects how it sounds and affects its semantics. To this end, we design Audio-MAE to better address local correlation of spectrogram patches for self-supervised learning from audio. \n\nIn Audio-MAE, we promote a more locally-focused reconstruction task as the pretext task, where distant (in time and frequency) spectrogram context are less influential for reconstruction. As an intuitive example, along the time axis, spectrogram patches of speech are locally correlated and are less related to distant (in time) patches. Along the frequency axis, patches of frictional sounds in consonants (4-8KHz) or harmonics in formants (<2KHz) are locally correlated across several neighboring patches and are less related on distant (in frequency) patches. We achieve this by incorporating shifted local attention windows. In our experiment this mechanism qualitatively (Fig.5) and quantitatively (Table2) yields improved results. \n\nThank you for pointing out strong local correlation in images and we will revise l.57 to make it more clear in the paper.\n\n\n**Q4** Masking analysis of the redundancy in sound nature?\n\nThanks for the insightful suggestion. We conducted a new experiment to analyze how masking affects classification on different sound types. In the following table, we vary the masking ratio in self-supervised pre-training and show the average precision (AP) for different sound types after fine-tuning on AudioSet.\n\n|mask rate|speech|music|aircraft|env. noise|\n|-|-|-|-|-| \n|30%|82.1|77.1|55.8|49.4| \n|50%|82.9|78.6|56.1|49.6|\n|70%|83.3|80.5|55.9|49.7|\n\nInterestingly, sound types with comparably more harmonics or redundancy (e.g. music, +2.4) benefited more from increased masking ratio in Audio-MAE, while noise-like/high-entropy classes (e.g. 
env. noise, +0.3) do not. We will include more per-class analysis in the appendix to gain more insights for masking in Audio-MAE. This experimental insight opens up a new hypothesis that we would like to explore: Masking ratios could be adaptive w.r.t. the redundancy of the data (e.g. music shows a clear gain for higher masking ratios). We thank the reviewer for this suggestion.\n\n\nWe hope our responses have covered and addressed your concerns. We are available and open to address any outstanding issues. Thank you for your insightful comments.\n\n\n", " Thank you for reviewing our work and the constructive feedback. \n\n**Q1** Being upfront with concurrent work: \n\nOur original manuscript included MaskSpec[42] as concurrent work (uploaded to arXiv on 4/27, 3 weeks before NeurIPS deadline), to be upfront with any existing work. Please note that it should not be considered as prior work, since it is less than the 2-month period in [NeurIPS-FAQ](https://neurips.cc/Conferences/2022/PaperInformation/NeurIPS-FAQ). Similarly for [38] which was posted 3/31 on arXiv, less than 2 months before NeurIPS deadline. We hope our submission will not be penalized by the concurrent work. We have added [42] as concurrent work into our related work and comparisons to be upfront with it.\n\nFollowing the reviewer’s suggestion, we emphasize some contributions: \n1) Audio-MAE is different (e.g., applying local attention with shifted windows). \n2) Audio-MAE's performance is much better (e.g., 37.1 vs 32.3 mAP in AS-20K, 98.3 vs 97.7 acc on SPC). \n3) Our work systematically investigates the impact of incorporating self-supervised and supervised ImageNet transfer for audio pre-training. \n4) Our work presents comprehensive qualitative results and audible insights (in Fig. 6 and Fig. 1 in the supplementary) that we hope are useful for the community. \n\n**Q2** Cons of transferring from ImageNet\n\nThe main drawback of ImageNet transfer is the reduced accuracy of our approach, shown in the experiments in Table 1h and discussed in l.284-297. \n\nCompared to audio-only pre-training on AudioSet, transferring from models pre-trained on out-of-domain ImageNet data degrades accuracy (row4, 47.3 $\\rightarrow$ 47.1(IN SSL)). Transferring from models trained with ImageNet labels further degrades accuracy (row5, 47.3 $\\rightarrow$ 46.9 (IN-SSL+IN-SL). We term this “label bias” in l.32, where we mean performance degradation due to domain shift and heterogeneity between visual and audio classes. \n\nWe are not against ImageNet pre-training per se since ImageNet pre-training could achieve faster convergence as shown in [r1-r2]. But our results indicate that, given our setup in Audio-MAE, transferring models trained with ImageNet data (visual objects) and labels (cats, dogs, etc.) to audio models does not lead to improvements compared to directly learning from large-scale audio data alone (e.g. AudioSet). Nevertheless, we agree that there are some benefits of ImageNet pre-training, e.g. many models are available. \n\n**Q3** Computation complexity is worse in pre-training and fine-tuning/feature extraction?\n\nFor comparing runtime across different systems and infrastructures, there are many factors that could impact the comparison (e.g. number of training epochs, length of each epoch, regularization, data fetching, performing video decoding or not, etc). For comparing to other works, we measure complexity with FLOPs as these are a hardware-independent measure of complexity, specified in Table 1a. 
\n\nIn pre-training, Audio-MAE is far more efficient than the main baseline SS-AST [10] since 80% of spectrogram patches are dropped before encoding in Audio-MAE. Audio-MAE's FLOPs and parameters are close to concurrent MaskSpec [42] (both with ViT-B or ViT-S as backbone) and Audio-MAE is more accurate, e.g., in AS-20K, 37.1 vs 32.3 mAP with ViT-B and 32.1 vs 28.9 with ViT-S. \n\nFine-tuning or feature extraction is based on a ViT-B transformer where the FLOPs (48.6G) and parameters (86M) are identical to other transformer baselines. Please note that, given similar or better computational complexity (we adopt masking at fine-tuning, hence the sequence length is shorter than the baselines'), Audio-MAE achieves state-of-the-art performance in all the tasks. For example, acc on SPC: Audio-MAE: 98.3; SS-AST: 98.0; MaskSpec: 97.7; AST: 98.1. \n\nWe thank the reviewer for pointing out the importance of comparing computational complexities and will add them to the final paper. \n\n\n**Q4** The large computational requirements compared to ImageNet transfer learning, with little improvements?\n\nFor a fair comparison of transferring/pre-training costs, we think that ImageNet (IN) pre-training cost should also be included. For example, Image-MAE trains on 1.2M IN images for 1600 epochs, while Audio-MAE trains on 2M audio clips for 32 epochs. Consequently, if counting the full training cost, Audio-MAE-PT is more effective and achieves better performance, e.g. 47.3 mAP (AS-SSL+AS-FT) vs 45.4 (IN-SSL+AS-FT).\n\nFurther, the proposed Audio-MAE (e.g., 37.1 mAP on AS-20K) significantly outperforms other models with IN-PT (e.g. AST (34.7), MBT (31.3)) or AS-PT (e.g. SS-AST (31.0), MaskSpec (32.3)). We nevertheless agree that ImageNet pre-training has its own merits as one can simply use off-the-shelf models provided by the community. We will make this clear in the paper. \n\n[r1] K. He et al, “Rethinking ImageNet pretraining,” CVPR 2019\n\n[r2] J. Li et al, “Audio AudioTagging Done Right,” arXiv.2203.13448\n\nThank you and we are available to address any outstanding issues.", " We thank the reviewer for pointing out one of our important findings in Table 1h: compared to audio-only pre-training, the additional out-of-domain ImageNet pre-training (IN-PT) relatively degrades fine-tuning performance for audio tasks (47.3 $\\rightarrow$ 46.9 mAP on AS-2M).\n\n**Q1** Why does ImageNet-PT no longer help?\n\nCompared to randomly initializing audio models from scratch, previous works on audio models (e.g. AST[10], MBT[11]) leverage models pre-trained with supervision on ImageNet-1K or ImageNet-21K. The underlying assumption is that the patterns of visual objects to some extent resemble the patterns of spectrograms, and the knowledge to classify IN classes (dogs, cats) can be transferred to audio events (music, speech). Empirically, they showed that using out-of-domain IN-PT resulted in faster convergence and better performance compared to random (“from-scratch”) initialization after fine-tuning for audio tasks. Some audio-only pre-training models have been proposed (e.g. Conformer[37], SS-AST[18]) to replace IN-PT yet still lag in performance, especially in the challenging AudioSet tasks.\n\nOur Audio-MAE is the first audio-only pre-training work that achieves state-of-the-art performance on AudioSet tasks. In this work we systematically study and compare the impact of using out-of-domain ImageNet for pretraining (i.e., self-supervised IN-PT with IN data and supervised IN-PT using IN labels). 
We show that the audio-only setup is sufficient to achieve the best performance (47.3 mAP) in Table 1h, outperforming other baseline with IN-PT in Table 2. Using only audio information, Audio-MAE does not rely on cross-modal knowledge transfer, i.e., pattern and class similarity between visual objects and audio spectrograms, as in prior work. In fact, incorporating self-supervised IN-PT MAE degrades performance (47.3 $\\rightarrow$ 47.1) and incorporating ImageNet labels in finetuned MAE representations degrades accuracy further (47.3 $\\rightarrow$ 46.9 in Table 1h).\n\nIn short, our results suggest that image-to-audio transfer is suboptimal compared to using true audio spectrograms for pre-training, under the Audio-MAE setup. In previous works, initialization with IN-PT was a useful approach compared to from-scratch initialization (i.e., IN-PT > AS-PT > from-scratch). This is not helpful for Audio-MAE, where audio-only training is sufficient and achieves better performance (i.e., AS-PT > IN-PT > from-scratch). \n\nWe hope our responses have covered and addressed your concerns. We are available and open to address any outstanding issues. Thank you for your insightful comments.\n\n", " We thank you for your time and effort to review our manuscript, and appreciate the positive feedback! We will include additional references and improve the writing quality in the final version. Regarding your questions:\n\n**Q1** Why not use linear spectrograms?\n\nThanks for the suggestion. We have experimented with linear spectrograms and found that Mel spectrograms yield better performance (e.g. 37.3 vs 36.8 mAP on AS-20K). Presumably this is because the Mel filter banks align better with human hearing perception which in turn facilitates machine perception. Further, Mel spectrograms are slightly more efficient in computation (25ms window. 64x8 (Mel) vs 64x13 (linear) spectrogram tokens (smaller is more efficient) for 10-sec audio under 16K sampling rate).\n\n\n**Q2** Comparison with supervised models?\n\nThis is a good suggestion and we can show more comparisons in the final version. For now, we compare Audio-MAE to other models with ImageNet supervised pre-training (IN-PT) in the bottom group of Table 2. Audio-MAE significantly outperforms these models, e.g., 37.1 mAP (Audio-MAE) vs 34.7 (AST[10]) and 31.3 (MBT[11]) on AS-20K. In Table 1h, we also ablate Audio-MAE with its own variants initialized with supervised ImageNet pretraining where we show that this is not beneficial over self-supervised audio-only pre-training for our setup.\n\n\n**Q3** Why choose AudioSet for pre-training?\n\nWe use AudioSet since: 1) It is a large and diverse audio collection. 2) We would like to set a fair comparison with the baselines. In more detail: \n\nFirstly, AudioSet is large (2M audio clips) and covers a wide range of sound types (~40% of audio clips are speech and the rest are event sounds or music). We leverage AudioSet for pre-training with a hope that the pre-trained model can generalize to various downstream tasks (e.g., audio and speech classification tasks). Secondly, AudioSet is the standard benchmark for audio event detection and studied by other baselines listed in Table 2. We would like to set a fair comparison with them.\n\n\n**Q4** Random masking technique?\n\nThe masks are generated randomly as in image-MAE[1], on the token level. \n\n- For unstructured masking, we randomly sample a $p$ portion of all tokens (each token corresponds to one spectrogram patch) and mask the sampled ones (Fig. 2b). 
For every token, this can be regarded as a Bernoulli process with a ratio *p* being masked. \n\n- For structured masking, we randomly sample a $p$ portion of time frame indexes or frequency band indexes and then mask them. For time frames (e.g. the highlighted vertical stripes in Fig. 2c) or frequency bands (e.g. the highlighted horizontal stripes in Fig. 2d), this can be regarded as Bernoulli processes with a ratio $p$ being masked. \n\nWe experimented with different masking types and ratios in Fig. 4 to find the best strategy, and will make this description clearer in the paper.\n\nFinally, thank you for the suggestion regarding VoxCeleb; we will consider using the SUPERB[53] platform to test out other VoxCeleb setups and other speech tasks. We will further add justification and references to earlier studies, e.g. on the nature of spectrograms, as suggested. Thank you. We hope our responses have answered the reviewer's questions, but if not we are available and open to addressing any outstanding ones.\n", " This paper extends earlier work on masked autoencoders (MAE) with an application to the self-supervised learning paradigm for audio processing. The main input representation is the mel spectrogram. Both encoder and decoder adopt a transformer-based architecture, and local attention is deployed at the decoder side, outperforming solely global attention. Such a network takes advantage of the nature of spectrograms, which encode correlations in local time and frequency bands. ## Strengths\n1. The improvement from the conventional MAE encoder is clearly explained and simple to grasp and reproduce, with detailed ablation analysis on the receptive field.\n2. (Personal preference) The idea itself is closely related to the nature of spectrograms, instead of coming from a purely statistical perspective. Although the idea of regarding spectrograms as image patches is not new, this work puts more emphasis on its relation with audio nature.\n\n## Weaknesses\n1. There are minor typos and grammatical mistakes in the paper. Please do proof-reading for the camera-ready version.\n2. The Mel spectrogram has its own limitations in information sparsity (especially for higher frequency regions). Therefore, raw or linear spectrograms might be better choices for this, at least as an additional ablation study.\n3. VoxCeleb experiments. I am not 100% sure about how the general ML community regards VoxCeleb, but officially it was proposed [1] as a verification task. Identification, in that case, can be regarded as open-set verification.\n4. Probably some comparison with supervised models can be useful. But this is trivial.\n5. Some explanations in Sections I and II lack justification or pointers to earlier studies, especially on explaining the nature of the spectrogram's relation to speech cues - everything there makes sense to me, but needs some references. 1. The dataset covered in the work and related tasks are not that common for the speech processing research community. Could you please address why you used AudioSet to train the model?\n\n2. For random masking, is there any specific technique involved for generating the masks and defining the default mask ratios? From earlier works? Or is it just a random choice we cannot tell? There are two main limitations in this work which are not addressed:\n1. Acquisition of the mel spectrogram only. It would be good to try various types of spectrograms.\n2.
Since VoxCeleb is involved, a scoring framework might be beneficial for verification experiments.", " The authors apply Masked Autoencoders from the vision domain to learn representations from audio spectrograms in a self-supervised fashion. The learned representations can be fine-tuned for competitive performance on various tasks. The authors propose using local attention windows. Strengths:\n- Clear and well-written presentation.\n- Achieves SOTA on AudioSet in a semi-supervised setup.\n- Doesn't rely on ImageNet and knowledge transfer from the vision domain.\n- Extensive experiments on AudioSet.\n\nWeaknesses:\n1. Low originality: the authors use MAE on spectrograms. I propose emphasizing the contributions of this work more clearly in the introduction (for example, adapting the local attention windows). Similar concurrent work [42] [38] (as the authors point out clearly) exists. In particular, MaskSpec[42] is very close to the proposed work. In light of this, what are the contributions of your work (question to the authors)?\n\n2. The authors argue repeatedly against knowledge transfer from ImageNet. However, these claims are not well justified; it is not clear what the drawbacks of ImageNet are (question to the authors). For example, in line 32, what do you mean by label bias in this situation?\n\n3. The huge computational complexity, even for fine-tuning, especially compared to models pre-trained on ImageNet.\n - For fine-tuning the supervised methods on AudioSet: AST[10] can be trained on 4 Titan RTX GPUs in 1 week. PaSST[28] (also uses masking with Patchout) in 25-50 hours on a single 2080ti.\n - In SSL: MaskSpec[42] takes 4 days on 8x V100 for the pretraining phase. I think this can be close in compute requirements to what the authors report?\n - How do you justify the huge computational complexity with the limited improvement (within the error bars) on AudioSet? (question to the authors) \n4. The large computational complexity limits the usability of the models to extract representations (in my opinion). (I suggest a large-scale evaluation of the learned representations on the tasks of the NeurIPS21 HEAR challenge, for example: https://hearbenchmark.com/ ) I've listed the questions in the weaknesses section. - The authors were up front about the similarity of their work to [42] [38]\n- The authors do not justify the claims about ImageNet transfer learning.\n- The large computational requirements compared to ImageNet transfer learning, with little improvements.", " The paper proposes a method for self-supervised learning for audio classification. It is based on the prior work MAE, which does self-supervised learning on images. The paper under review adapted the MAE model to audio by using the spectrogram as an image. The decoder was changed to use local self-attention, as audio tends to have more local correlations.\n\nThen, the paper investigates the proposed architecture on several audio classification datasets. 
The paper conducts ablation studies on several aspects of the model: the masking strategies (finding that the unstructured masking works best for pre-training, and time+frequency masking works best for fine-tuning), the patch size and stride (finding that the overlap is not necessary), the encoder size, the decoder local or global attention, the length of the pre-training, and using the out-of-domain data for pre-training.\n\nFinally, the paper compares the results to the previously published works. The proposed model outperforms all the previous in-domain models. Then, the in-domain proposed model matches or outperforms some of the out-of-domain methods on some datasets, while it is worse on other datasets. # Strengths\n\nThe proposed model is sound. Applying and testing the CV models on audio is important and poses its own challenges, which were overcome in this publication.\n\nThis paper conducts thorough experimentation on multiple datasets with many ablations. The experimental section contains a lot of information and is very interesting to read.\n\nIn general, the paper is very well written and easy to follow. I was able to follow all the logical steps and transitions. The amount of detail is not overwhelming, but provides all the necessary information.\n\n# Weaknesses\n\nWhile I find it logical that pre-training on image data should not help much for audio training, the previous publications find it useful. This paper concludes that the out-of-domain data is not beneficial. This looks like an important area to address (see the question below).\n\n The paper argues in 4.4 that the out-of-domain training is not beneficial or even harmful. Nevertheless, Table 2 shows that other methods benefit from out-of-domain training. This needs some clarification. N/A", " This paper presents the application of Masked Autoencoders (MAE) to audio spectrograms in a self-supervised framework. The overall idea is to rely on masking to widely corrupt the input data and train a Transformer AE to fill the blanked part of the spectrogram, leading to a form of inpainting self-supervised task. The authors study several different masking strategies in the context of spectrograms, and also analyse the impact of local or global attention in the decoder. The quality of the proposed method is assessed in downstream speech and classification tasks. Overall, the use of MAE for audio is an interesting avenue of research, as they fit self-supervised approaches perfectly. The experiments appear sound and are numerous, which is one of the major strengths of the paper.\n\nHowever, the two main weaknesses of this paper are that the originality and the amount of contribution are quite low (as the authors state themselves, it is a \"conceptually simple extension of MAE to audio\"). Although the authors study several properties of the model and input data, it feels more like an engineering paper and I would recommend resubmitting this paper to a more applied conference. Furthermore, the paper is sometimes not very well written, with several long and hard to follow sentences. The same applies to the related work section, which is a laundry list of citations, without clearly delineating the advantages or flaws of past methods. - My major question is on what you call an \"unstructured\" strategy: if we refer to Figure 2, I do not see any conceptual difference between (b) and (d), given that the \"unstructured\" masking is simply also a selection of time-frequency patches of a given size? 
I think a truly unstructured approach would rather look like a Bernoulli dropout over the whole spectrogram?\n- I quite disagree with the overall premise (l.57) that \"unlike image patches, spectrogram patches are mostly locally correlated\" ... and I even think that it is exactly the opposite! The stationarity property is way stronger in images, and local correlations are largely more present than in spectrograms. You take the example of formants, which is actually the strongest local correlation that you could find, and even then vocal formants still follow an harmonic structure (hence global and cyclical correlation across frequencies). Any harmonic or noisy sound is a counter-example to this local correlation property.\n- You talk about the high redundancy in the spectrogram (l. 118), and I think that analysing this aspect would make this paper stronger. How much does it impact the masked learning, depending on the nature of the sound? -" ]
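The two masking families debated in this exchange (token-level "unstructured" masking vs. time/frequency stripe masking) can be sketched in a few lines. This is a minimal NumPy illustration under assumed shapes (a 64x8 token grid is used only as an example); the function names are not from the paper's code.

```python
import numpy as np

def unstructured_mask(n_time, n_freq, ratio, rng):
    """Mask a random `ratio` of all spectrogram patch tokens (Fig. 2b-style)."""
    n_tokens = n_time * n_freq
    n_masked = int(round(ratio * n_tokens))
    flat = np.zeros(n_tokens, dtype=bool)
    flat[rng.choice(n_tokens, size=n_masked, replace=False)] = True
    return flat.reshape(n_time, n_freq)

def structured_mask(n_time, n_freq, ratio, rng, axis="time"):
    """Mask whole time frames (vertical stripes) and/or frequency bands (horizontal stripes)."""
    mask = np.zeros((n_time, n_freq), dtype=bool)
    if axis in ("time", "time+freq"):
        idx = rng.choice(n_time, size=int(round(ratio * n_time)), replace=False)
        mask[idx, :] = True
    if axis in ("freq", "time+freq"):
        idx = rng.choice(n_freq, size=int(round(ratio * n_freq)), replace=False)
        mask[:, idx] = True
    return mask

rng = np.random.default_rng(0)
m1 = unstructured_mask(64, 8, ratio=0.8, rng=rng)            # e.g. 80% of 64x8 tokens
m2 = structured_mask(64, 8, ratio=0.3, rng=rng, axis="time+freq")
print(m1.mean(), m2.mean())
```

True entries mark tokens hidden from the encoder; only the visible tokens would be passed to an MAE-style encoder, and the decoder is asked to reconstruct the masked ones.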
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 4, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "KO3PBegHYN", "vr0DBQhJoK_", "E2g69DcD36z", "I3ou_oa1n6D", "6uwqr9kq9Yv", "vr0DBQhJoK_", "EVxEmlFp7x", "EVxEmlFp7x", "flF80iOB5PD", "FNNiH84ZhJ", "gladqu-Tt-F", "nips_2022_MAMOi89bOL", "nips_2022_MAMOi89bOL", "nips_2022_MAMOi89bOL", "nips_2022_MAMOi89bOL" ]
nips_2022_Ho_zIH4LA90
MinVIS: A Minimal Video Instance Segmentation Framework without Video-based Training
We propose MinVIS, a minimal video instance segmentation (VIS) framework that achieves state-of-the-art VIS performance with neither video-based architectures nor training procedures. By only training a query-based image instance segmentation model, MinVIS outperforms the previous best result on the challenging Occluded VIS dataset by over 10% AP. Since MinVIS treats frames in training videos as independent images, we can drastically sub-sample the annotated frames in training videos without any modifications. With only 1% of labeled frames, MinVIS outperforms or is comparable to fully-supervised state-of-the-art approaches on YouTube-VIS 2019/2021. Our key observation is that queries trained to be discriminative between intra-frame object instances are temporally consistent and can be used to track instances without any manually designed heuristics. MinVIS thus has the following inference pipeline: we first apply the trained query-based image instance segmentation to video frames independently. The segmented instances are then tracked by bipartite matching of the corresponding queries. This inference is done in an online fashion and does not need to process the whole video at once. MinVIS thus has the practical advantages of reducing both the labeling costs and the memory requirements, while not sacrificing the VIS performance.
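The "tracking by bipartite matching of the corresponding queries" step described in this abstract can be illustrated with a short sketch. This is an assumed re-implementation of the matching step only (cosine similarity between per-frame query embeddings plus Hungarian assignment), not the authors' code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_queries(prev_q, cur_q):
    """Assign each query of frame t to a query of frame t-1 by cosine similarity.

    prev_q, cur_q: (num_queries, dim) query embeddings from the image-level model.
    Returns an index array `assign` with cur_q[i] matched to prev_q[assign[i]].
    """
    a = prev_q / np.linalg.norm(prev_q, axis=1, keepdims=True)
    b = cur_q / np.linalg.norm(cur_q, axis=1, keepdims=True)
    sim = b @ a.T                           # (cur, prev) cosine similarities
    row, col = linear_sum_assignment(-sim)  # maximize total similarity
    assign = np.empty(len(cur_q), dtype=int)
    assign[row] = col
    return assign

# Toy usage: 5 queries of dimension 8 per frame, second frame is a permuted, noisy copy.
rng = np.random.default_rng(0)
q_prev = rng.normal(size=(5, 8))
q_cur = q_prev[[2, 0, 1, 4, 3]] + 0.01 * rng.normal(size=(5, 8))
print(match_queries(q_prev, q_cur))  # recovers the permutation [2, 0, 1, 4, 3]
```

Chaining this matching frame by frame keeps the inference online and per-frame, which is where the memory advantage over per-clip models comes from.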
Accept
All four reviewers are positive about this work. Reviewers appreciate the clear writing, simple yet highly effective idea, and strong experimental validation on three Video Instance Segmentation datasets. The authors responses further clarified and sufficiently addressed the concerns from the reviewers. The AC reads the reviews, the rebuttal, and agree with the reviewers to recommend acceptance.
train
[ "hW0cKpI9t2w", "S-y9H7Biwre", "Rf5DHj97Cxp", "D_OYpoCiXo2", "F9Wctj1MK8R", "sU0eVOMXv0m", "ZYHxOlV4Wdt", "9Gq-S1dWsPn", "rHwY6DOquAQ", "sg6Z1sLqoK4" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, thank you very much for the appreciation of this work!", " I would like to thank the authors for their responses. Most of my concerns are addressed. I’ve read the comments from other reviewers and the author responses. I’ve also read the revised manuscript and appendix (the paragraphs highlighted in blue). The additional experiments and qualitative results for failure cases are useful analysis.\n\nOverall, the paper proposes a neat idea that works effectively on common VIS benchmarks. I think it’s a nice paper and will be of interest to the community. I therefore recommend to accept the paper. ", " Thank you for your comments. We have revised the paper based on your comments. Please find our responses to specific questions below.\n\n - **Technical Contribution**: We agree with the reviewer that it is important to clarify our technical contribution. We would like to clarify that our approach is not limited to Mask2Former as pointed out by the other reviewer. Our key finding is that instance tracking naturally emerges in image instance segmentation models with proper architectural constraints. It is non-trivial to find that video-based architecture and training are not required for competitive VIS performance. This finding is especially important when most of the recent approaches in VIS are dominated by the per-clip paradigm. We further leverage this finding to achieve state-of-the-art VIS performance on multiple datasets (over 10% improvement on OVIS) by only training an image instance segmentation model. Our approach thus directly brings advances in image instance segmentation to the video space and has practical advantages including reducing required supervision in our experiments (can use only 1% of labeled frames).\n\n - **Visualizing Embeddings for Supervised Matching**: We agree with the reviewer that it is important to further analyze supervised matching in Section 4.3. We conduct the same visualization for MinVIS + Supervised Matching + Limited Range on YouTube-VIS 2019. More details are in Appendix G. While the plots look similar for most videos, one consistent trend we observe is that adding supervised matching makes the embeddings more evenly distributed and smooths out the outliers in the embedding space. Some examples are shown in Figure 7. This is a reasonable consequence as the objective encourages the embeddings from the same object instance to be closer to each other. However, it is unclear whether this is overall beneficial to our tracking by query matching. For example, in $V_3$ in Figure 7, the outliers are removed at the cost of mixing embeddings from different instances. \n\n - **Efficiency Benefits of MinVIS**: We agree with the reviewer that our image-based approach should have computational advantages. The key difference compared to per-clip approaches is that now our Transformer decoder only needs to process tokens in an image, instead of having a complexity that’s quadratic with respect to the number of frames. We conduct further benchmarking and find that the Transformer decoder component only accounts for less than 10% of the computational time in our current implementation. A large part of the computation is in the visual backbone processing of each frame, which is the same for our approach and per-clip baselines. Thus, we do not observe significant speed improvements. However, our approach does have a significant memory advantage especially for long videos, as our memory complexity is in principle independent of video length. 
\n\n - **Limitations of Not Using Video-based Training**: We agree with the reviewer that videos provide lots of extra information that we are not currently leveraging. We have expanded Appendix A to discuss limitations of not incorporating any video-based architectures/training procedures in more depth. More specifically, temporal supervision should improve our tracking performance to prevent failure cases in Appendix F. In addition, video information is also beneficial to explore semi-supervised settings, where we temporally propagate the annotation to improve the performance of our model trained with sub-sampled annotations.", " Thank you for your comments. Please find our response to your question below.\n\n - **Technical Contribution**: We agree with the reviewer that our approach uses Mask2Former’s query embedding for instance association and it is important to clarify our contribution. We would like to clarify that our approach is not limited to Mask2Former as pointed out by the other reviewer. Our key finding is that instance tracking naturally emerges in image instance segmentation models with proper architectural constraints. It is non-trivial to find that video-based architecture and training are not required for competitive VIS performance. This finding is especially important when most of the recent approaches in VIS are dominated by the per-clip paradigm. We further leverage this finding to achieve state-of-the-art VIS performance on multiple datasets (over 10% improvement on OVIS) by only training an image instance segmentation model. Our approach thus directly brings advances in image instance segmentation to the video space and has practical advantages including reducing required supervision in our experiments (can use only 1% of labeled frames).\n", " Thank you for your comments. We have revised the paper based on your comments. Please find our responses to specific questions below.\n\n - **Qualitative Results on Failure Cases**: We agree with the reviewer that it is important to provide analysis on model failures in our paper. The additional results and discussions are in Figure 6 and Appendix F. As discussed in Section 3.2, MinVIS does not use any heuristics to handle the birth and death of object instances. The death of an object instance is correctly handled if its query is matched to a query in the next frame that produces an empty mask. Despite its simplicity and effectiveness, the drawback of this approach is that there is nothing stopping the model from matching the disappearing query to a query with a non-empty mask. The top of Figure 6 shows an example when two fishes on the left leave the frame, MinVIS associates them with non-empty masks on nearby fishes. On the other hand, while MinVIS correctly handles the object birth and death in the bottom row, MinVIS is still limited by the image segmentation model, which fails to segment the close-up person. \n\n - **t-SNE Visualization on Validation Sets**: We agree with the reviewer that visualizing embeddings in validation provides better understanding of our approach. The visualization is in Figure 5. Since our visualization uses groundtruth, which is not publicly available for validation sets of the datasets used in this paper, we further use a 90/10 split on the training set for this visualization. The details are in Appendix E. Despite being noisier than training videos, the query embeddings are still grouped into clusters by object instances for videos not used in training. 
This is also quantitatively supported by our state-of-the-art VIS performance on the three datasets.\n\n - **Compute Times**: Since our models are initialized from COCO-pretrained weights, we only need to fine-tune for 6k iterations on YouTube-VIS 2019 (10k for OVIS). The whole training takes about 1 hour for R50 on 8 V100 GPUs, and about 1.25 hour for Swin-L on 16 V100 GPUs. For inference, each video on average takes 0.9s for R50 and 2.0s for Swin-L \n", " Thank you for your comments. We have revised the paper based on your comments. Please find our responses to specific questions below.\n\n - **Data Augmentation**: We use standard data augmentation strategies as in previous works: randomly resizing shortest edge to [360, 480], and randomly flipping the video clip. We use the same data augmentation for all settings (1%, 5%, 10%, Full Supervision).\n\n - **Reduced Supervision for Baseline**: We agree with the reviewer that it is important to also apply our annotation sub-sampling experiment to per-clip baselines. We thus also apply our low data settings to Mask2Former-VIS. The full results are in Table 9 and Appendix D. MinVIS consistently outperforms Mask2Former-VIS in all settings. The improvement increases for all three datasets when we sub-sample the annotation: +1.2% for full supervision v.s. +1.7% for 1% supervision on YouTube-VIS 2019. +2.7% for full supervision v.s. +5.8% for 1% supervision on YouTube-VIS 2021. +13.6% for full supervision v.s. +17.2% for 1% supervision on OVIS. \n\n - **Tables with Standard Deviation**: Tables with standard deviation are in Appendix C, which is originally in the supplementary. We have now moved the appendices from supplementary to after the main paper.\n\n - **Temporal Sampling on OVIS for Baseline**: The reported results on OVIS use default temporal sampling on YouTube-VIS. However, we have also experimented with the limited range strategy for Mask2Former-VIS, which did not improve the results. Therefore, we report the default temporal sampling result for consistency. As discussed in “Implementation Details” the only hyperparameter change for OVIS is to increase the number of iterations from 6k to 10k.\n\n - **Figure 4 Explanation**: We agree with the reviewer that Figure 4 requires further explanation. While both MinVIS and Mask2Former-VIS are built on the Mask2Former architecture, this does not imply that both would have the same behavior because even the same architecture can still learn different model weights during training. The difference in training is that Mask2Former-VIS’s training objective asks the model to directly resolve tracking under heavy occlusion. This is hard to optimize and affect the model performance. We have included the training curves comparing Mask2Former-VIS and MinVIS in Appendix H for further illustration. Mask2Former-VIS has a much higher total loss compared to MinVIS in training, which affects the segmentation performance in inference.", " The paper presents a video instance segmentation method for simultaneously segmenting and tracking objects in video sequences. The proposed per-frame method first uses the query-based MaskFormer [27] to segment objects frame by frame, and associate object segments in different frame by matching the queries. The proposed method achieves SOTA performance on three VIS datasets. Additionally, the paper demonstrates competitive results of the proposed method when 1%, 5% and 10% of the training data is available. 
**Strengths**\n\nThe proposed approach takes the advantage of the design of the query-based image segmentation method and extend it to tracking object segments effectively. The proposed method achieves SoTA performance across three VIS datasets.\n\nThe proposed method only requires training the query-based image segmentation method. I appreciate the simplicity.\n\nThe paper is easy to follow.\n\n****\n\n**Weaknesses**\n\nThe key idea of the proposed approach is to use queries to associate object segments in different frame. In terms of originality, this is not entirely novel and have been used and studied in other related tasks, e.g., multiple object tracking (MOT). Importantly, the paper ignores literature review on related query-based works that also use queries for object association and tracking or related video tasks.\n \nIn the experiments, the authors show competitive results of their approach with little amount of training data. However, it’s unclear whether other baselines also have little reduction in the performance even with little amount of data. Note that data augmentation can and should be applied for those baselines, and it is still a fair comparison in terms of the amount of training data. \n\nThe method highly depends on MaskFormer’s architecture design and its segmentation results.\n As mentioned in weaknesses above, what are the performance of baselines with 1%, 5%, 10% of the training data? Preferably data augmentation should be performed. Note that the clip-based baseline MaskFormer-VIS uses clips consisting of only 2 frames for training. Therefore, one can generate pairs of frames for training using data augmentation even in the 1% setting\n\nWhat are the data augmentation strategies for training MaskFormer in the proposed approach, especially in the low data settings? Is the data augmentation performed in the low data settings different from the one with full data?\n\nWhy don’t Table 4 and 5 have the standard deviation included? It seems hard to conclude which strategies are better as the APs for some of them are very similar (within 0.2 difference).\n\nAccording to Ln 258, the temporal sampling strategy during training is important. What is the sampling strategy for the baselines? Do the authors try different sampling strategies and data augmentation on OVIS dataset except for those default ones on YouTube-VIS?\n\nIn Fig 4 column $t_1$ to $t_4$, why does the baseline Mask2Former-VIS miss to detect the second sheep from the right in the image? It doesn't make a lot of sense to me that MinVIS can detect that object but Mask2Former fails given that both of them are highly based on MaskFormer. Is it because Mask2Former-VIS uses a single query for an object in the video, and using only a single query cannot adapt to the drastic appearance change in this case? If it is the case, the choice of applying Mask2Former-VIS to clips with length of 30 (Ln 246) might not be optimal. Yes, the authors addressed the limitations and potential negative societal impact in section 4.3 and the supplementary document.", " The paper proposes a novel video instance segmentation framework called Min-Vis which leverages transformed based architecture to segment object instances in videos. The key contributions are:\n\n* The paper demonstrates that good performance can be obtained for this problem even by using image level instance segmentation annotations alone. 
And the instance tracking problem can be solved by conducting bipartite matching on object queries between adjacent frames without requiring any training.\n\n* The paper also demonstrates that, because of the simplicity of its design, it can even be trained with a much smaller dataset size.\n\n* The paper achieves state-of-the-art performance on multiple VIS benchmarks. Strengths:\n\n* This is a solid paper with state-of-the-art results for the problem of video instance segmentation. The gains on YouTube-VIS 2021 and the OVIS dataset are impressive. The paper also presents good ablation studies. Looking at the current leaderboard for the YouTube-VIS dataset, the proposed method looks to be the best. \n\n* The proposed approach is fairly generic and can be extended to other tasks beyond instance segmentation. More experiments are needed, but if the assumption that cosine similarity between object query embeddings is enough to associate instances between frames holds, this will be a natural method to apply to multi-object tracking problems as well. \n\n* The ability to train video instance segmentation models using image-only instance segmentation masks brings a significant practical advantage to this method compared to others. \n\nWeakness:\n* More datasets like the DAVIS benchmark and \"Unidentified Video Objects\" from Meta can be used for additional experiments. This will provide more diversity to the benchmarks on which the method is evaluated. \n* No analysis of model failures is presented in the paper; a brief discussion with some visual examples could provide more insights into where the current approach breaks down. \n* Visualization of object embeddings (Figure 3) should be presented for validation sets as well for better understanding. \n More details on compute times for training and inference will be interesting to learn about. Yes", " This paper proposes a new video instance segmentation (VIS) framework, MinVIS, which achieves state-of-the-art VIS performance with neither video-based architectures nor video-based training procedures. With this image-based framework, this work is more flexible in its training data requirements. Strengths:\n1) This paper achieves SOTA results on various video instance segmentation datasets. \n2) The method of this paper is simple but satisfactory in performance.\n3) This paper is well organized.\n\nWeakness:\n1) The technical contribution of this paper is somewhat limited. The critical contribution of this paper is to utilize the query embeddings from Mask2Former [27] and show that they are effective for instance association, but no further novel idea or method is proposed.\n As described in weakness. No serious negative societal impact in this work.", " The paper proposes MinVIS, a minimal video instance segmentation (VIS) system that obtains SOTA results on a variety of VIS benchmarks without employing video-based architectures or video-based training. The proposed system operates in two stages: 1) first, an image-level instance segmentation model is trained; 2) afterward, the frame-level instance associations across frames are built using a bipartite matching algorithm. The authors demonstrate that despite not using any video-related information, their proposed framework achieves highly competitive results in comparison to video-based VIS solutions. 
Strengths:\n+ The authors tackle a challenging and highly impactful problem of video instance segmentation (VIS).\n+ The proposed system is very simple, which would provide a convenient baseline for other researchers to build on.\n+ The experiments are thorough and convincing, i.e., the proposed method achieves state-of-the-art results on multiple VIS benchmarks.\n+ The paper is easy to read and understand.\n+ Additional experiments showing strong performance with a limited amount of labeled VIS data.\n+ The proposed method can be used in an online setting.\n\nWeaknesses:\n- Overall, I enjoyed reading the paper. The proposed approach is simple, yet highly effective. My only concern is that the proposed approach feels somewhat incremental, i.e., the technical contributions are quite limited. To be more precise, the proposed framework is largely built on Mask2Former, which was also originally developed for images. The bipartite matching algorithm to associate instances across frames has been widely used both in the image-only settings and in the video-related settings in the past. The only substantial contribution that I could spot is in the design of a prediction head, i.e., imposing the constraint to have Q convolve with the whole feature map F_{−1}. It's a small, yet important design choice that simplifies instance segmentation prediction. \n- In my view, the authors could discuss the limitations of not incorporating any video-based architectures/training procedures in more depth. Video provides lots of extra information that the authors are ignoring in this case. While I appreciate the simplicity and effectiveness of their solution, I think it would also be useful to have a more detailed discussion how their framework could be improved while incorporating video cues. Section 4.3 does not provide a good discussion on that. \n- The authors should carefully proofread their draft. There are some typos. Questions:\n1. Could the authors briefly summarize their TECHNICAL contributions, and also present a detailed comparison with the Mask2Former system, and how their proposed approach differs from this prior work.\n2. I was quite puzzled by the results obtained using video-based training in 4.3. It's quite surprising to see the performance drop, and the justification that this happens due to occlusions doesn't seem very convincing to me. This is based on my own past experiences, where such pairwise matching-based training was highly successful for tracking. It would be useful if the authors could visualize the embeddings resulting from such training as they did in Figure 3 of the original draft. \n3. It would also be useful if the authors could incorporate some efficiency metrics. I imagine that the proposed approach is more efficient than prior video-based systems. It would be good to highlight such advantages. In my view, the authors could discuss the limitations of not incorporating any video-based architectures/training procedures in more depth. Video provides lots of extra information that the authors are ignoring in this case. While I appreciate the simplicity and effectiveness of their solution, I think it would also be useful to have a more detailed discussion how their framework could be improved while incorporating video cues. Section 4.3 does not provide a good discussion on that. " ]
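The prediction-head constraint mentioned in the last review (each query "convolved" with the whole last-scale feature map F_{-1}) and the empty-mask check used by the authors to retire disappearing instances can be sketched as follows. Shapes, the sigmoid threshold, and the minimum-pixel rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def predict_masks(queries, feat, thresh=0.5):
    """queries: (Q, C); feat: (C, H, W) last-scale feature map F_{-1}.

    Each query acts as a 1x1 convolution kernel applied over the whole
    feature map, producing one mask logit map per query.
    """
    logits = np.einsum("qc,chw->qhw", queries, feat)
    return 1.0 / (1.0 + np.exp(-logits)) > thresh

def alive(masks, min_pixels=1):
    """An instance is treated as 'dead' when its matched query yields an empty mask."""
    return masks.reshape(len(masks), -1).sum(axis=1) >= min_pixels

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 16))
f = rng.normal(size=(16, 32, 32))
m = predict_masks(q, f)
print(m.shape, alive(m))
```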
[ -1, -1, -1, -1, -1, -1, 7, 8, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "S-y9H7Biwre", "sU0eVOMXv0m", "sg6Z1sLqoK4", "rHwY6DOquAQ", "9Gq-S1dWsPn", "ZYHxOlV4Wdt", "nips_2022_Ho_zIH4LA90", "nips_2022_Ho_zIH4LA90", "nips_2022_Ho_zIH4LA90", "nips_2022_Ho_zIH4LA90" ]
nips_2022_7b7iGkuVqlZ
Unsupervised Learning of Equivariant Structure from Sequences
In this study, we present \textit{meta-sequential prediction} (MSP), an unsupervised framework to learn the symmetry from the time sequence of length at least three. Our method leverages the stationary property~(e.g. constant velocity, constant acceleration) of the time sequence to learn the underlying equivariant structure of the dataset by simply training the encoder-decoder model to be able to predict the future observations. We will demonstrate that, with our framework, the hidden disentangled structure of the dataset naturally emerges as a by-product by applying \textit{simultaneous block-diagonalization} to the transition operators in the latent space, the procedure which is commonly used in representation theory to decompose the feature-space based on the type of response to group actions. We will showcase our method from both empirical and theoretical perspectives. Our result suggests that finding a simple structured relation and learning a model with extrapolation capability are two sides of the same coin. The code is available at https://github.com/takerum/meta_sequential_prediction.
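The prediction mechanism described in this abstract (fit a single linear transition operator in latent space on the conditioning frames, then roll it out to predict the future) can be sketched compactly. The sketch below covers only the latent-space fit and rollout with a toy 2-D latent; in the actual framework the encoder and decoder are trained end-to-end through this inner least-squares step, and all names here are illustrative.

```python
import numpy as np

def fit_transition(latents):
    """latents: (T_c, m) encoded conditioning frames; returns M* such that
    latents[t] ≈ latents[t-1] @ M*, fitted by least squares."""
    X, Y = latents[:-1], latents[1:]
    M_star, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return M_star

def predict(latents, n_future):
    M_star = fit_transition(latents)
    z, preds = latents[-1], []
    for _ in range(n_future):
        z = z @ M_star
        preds.append(z)
    return np.stack(preds)

# Toy sequence: a 2-D latent rotating at constant angular velocity (a stationary sequence).
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
z0 = np.array([1.0, 0.0])
seq = np.stack([z0 @ np.linalg.matrix_power(R, t) for t in range(3)])  # T_c = 3
print(predict(seq, n_future=2))            # continues the rotation
print(z0 @ np.linalg.matrix_power(R, 4))   # ground truth for the last predicted step
```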
Accept
While there was a certain lack of enthusiasm in the scores of the reviewers, the author's answers cleared the concerns of the reviewers participating in the discussion and overall the recommendation leans towards acceptance. This paper is, in the reviewers' opinions, sound and adds to the literature on unsupervised learning of symmetry. The formulation (of learning symmetry by only modelling linear transitions) is nicely simple. Experiments and evaluations generally were considered of adequate quality.
train
[ "qsOFATNlng3", "2MAsWYKXaV9", "IwtBAm74TIJ", "RSCNYdOPnIz", "mjoN0KYS8y", "sQLbdUfBBLX", "_EuDgakG8_g", "itNlZP5Xajg", "vjGtEqfUY9f", "ZWPyMQB_PAj", "TUSCUkKtAt8", "R7oGXWdDe3S_", "047ikamlCzD", "qBhCpCrcUti", "FjO0jJwnbpr", "1bvMFKhj01", "RknCLq83Z6e" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the further clarifications and the updates to the draft.", " Thank you very much for your comment, and we are glad our response clarifies your concerns.\n\n> The reason that the prediction task gives disentanglement is that by representation theory, an equivariant model gives features that can be simultaneously diagonalized.\n\nYes, if the model is equivariant, the representation theory guarantees that $M^*$ can be simultaneously block-diagonalized across different orbits by an appropriate change of basis matrix $U$. Using such $U$, the original feature is transformed to $U\\Phi(s_t)$, for which each block of $UM^*U^{-1}$ acts on the corresponding subspace of $U\\Phi(s_t)$ exclusively.\n\n\n> Even though the model is not shown to be equivariant theoretically (i.e. Prop 3.1 & 3.2 are related to orbits), empirically the model is found to be equivariant.\n\nYes, although Prop 3.1 and 3.2 together show that $M^*(g, x)$ with the same $g$ are similar (in the sense of linear algebra) on different orbits, they do not fully prove the equivariance because the \"equivariance relation\" requires that $M^*(g, x)$ with the same $g$ do not depend on $x$ at all. \nAnd yes, the full equivariance is achieved by our proposed meta-learning algorithm. \n\n\n> highlight this connection a bit more and de-emphasize the meta learning aspect (since it’s a simple training trick without theoretical justification).\n\nIn our current manuscript, we have some emphasis on the meta-learning aspect because, as we empirically demonstrate in ablations (Section 5), this aspect seems to be important in achieving the equivariance that cannot be justified with Thm 3.1 and Thm 3.2 alone.\n\nWe shall note that the fact that the equivariance relation has a direct connection to disentanglement is a well-known fact [Reference 4, 8 in our initial response]; the achieved disentanglement supports our claim that our model is successfully learning an equivariant model.\nAt the same time, identifying the hidden equivariance relation in the dataset is still a big challenge today, \nand our theory shows that we can make the model “almost equivariant” by training the model to be able to predict the future with linear transition in the latent space. \nHowever, the empirical evidence shows that, with our “meta” framework, the equivariant model can be learned, and we show in our ablation study that the “meta” part of our framework is important (In section 5, Neural $M^*$ corresponds to the non-meta version of our training procedure). \nTo clarify this point, we modified the end of our introduction by reordering the statements and changing some of the expressions.\n", " Dear Reviewer htB6,\n\nWe have made a detailed reply to your comments and revised our manuscript to reflect your concerns.\nIn our rebuttal comments, we discussed the relationship between VAE and our work. Also, we described the detailed setting on the comparative experiments against SimCLR and CPC. \nWe also noted the motivation behind each variant of our method and added the description to the revised manuscript in the appendix D.1.\nWe would appreciate it if you give us further feedback on the revision and our rebuttal.", " Thank you very much for the detailed responses and clarifications. 
I've adjusted my review, but since I missed the key point of the paper previously, I want to make sure that my understanding is now correct:\n- The reason that the prediction task gives disentanglement is that by representation theory, an equivariant model gives features that can be simultaneously diagonalized.\n- Even though the model cannot yet be proven equivariant theoretically (i.e. Prop 3.1 & 3.2 are related to orbits), empirically the model appears to be equivairant and discovers disentangled factors.\n\nIf the above understandingly is correct, then perhaps one way to make the main message clearer is to highlight this connection a bit more and de-emphasize the meta learning aspect (since it's a simple training trick without theoretical justification).\n", " Thanks for the comments. I have gone through the updated PDF and see a number of changes from the intro to the related works (including more references on Koopman, which I think are relevant), description, experiments. (As an aside, I think there is a typo in figure 6). I again looked at the appendix based on the suggestions by the authors, and do feel the presentation has seen some improvement. Although I still feel there might still be quite some room for improvement. I will raise my score by a notch. ", " *About Typographies*\n \nThank you very much for pointing out our typos. We have fixed the errors as below: \n \n> Please fix citation [13]” \n\nThank you, we fixed the citation. \n\n> Line 115: do you mean members of $S$ (rather than $s$ which is itself a member of $S$)?\n\nYes, thanks for pointing this out. We meant to say “members of $S$”.\n\n> Line 179: should it be \"optimized on ${\\bf s_p}$?: \n\nThank you very much. This is indeed a typo. We fixed it.\n\n> Inconsistent notations: ... \n\nTo reduce confusion with $H_{+1}$, we stopped to use $H_i$ to denote $\\Phi(s_i)$. We also fixed the typo of $t_p$. \n\n> Line 233 & 234: it would be more consistent to use either 256 or m (but not both)...\n\nWe fixed the notation accordingly. \n\n> The paragraph on line 252: it would be better to briefly explain what \"better representation\" means.\n\nWe made sure to reiterate that we intended here to measure the “wellness” of the representation by predicting multiple structural features in the dataset (See Figure 8). \nWe also added in the revision the result of regressing the digit class (Figure 9). \n\n> Line 289: there seems to be a typo in the definition of $\\hat{M}^*$ \n\nThank you very much, this is a typo and we fixed it. We should have had $A(\\hat{M}^*):=$ in place of $\\hat{M}^*:=$ here.\n\n**References**\n\n[1] A. Hyvarinen, and H. Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ica. NeurIPS, 2016.\n\n[2] S. Zhang, Y. Wang, and A. Li. Cross-view gait recognition with deep universal linear embeddings. CVPR, 2021.\n\n[3] T. Keller and M. Welling. Topographic vaes learn equivariant capsules. NeurIPS, 2021. \n\n[4] R. Kondor. Group theoretical methods in machine learning. Columbia University, 2008.\n\n[5] P. Toth, et al. Hamiltonian generative networks. ICLR, 2020.\n\n[6] J. Hsieh, et al. Learning to decompose and disentangle representations for video prediction. NeurIPS, 2018.\n\n[7] R. Kabra, et al. Simone: View-invariant, temporally-abstracted object representations via unsupervised video decomposition. NeurIPS, 2021.\n\n[8] T. Cohen and M. Welling. Learning the irreducible representations of commutative lie groups. ICML, 2014.\t\t\n\n[9] L. Falorsi, et al. 
Explorations in homeomorphic variational auto-encoding. arXiv preprint arXiv:1807.04689, 2018.\t\t\n\t\n[10] S. H. Weintraub. Representation Theory of Finite Groups: Algebra and Arithmetic, volume 59. American Mathematical Society, 2003. \n\n[11] A. Zhou, T. Knowles, and C. Finn. \"Meta-learning symmetries by reparameterization.\" ICLR, 2021.\n\n[12] N. Dehmamy, et al. \"Automatic Symmetry Discovery with Lie Algebra Convolutional Network.\" NeurIPS, 2021.\n", " > Line 276, about simultaneously block-diagonalizable: please provide reference for this.\n\nWe added [4] as a reference to the simultaneously block-diagonalization. Please see Section 1.2 Representation Theory for the details.\n\n\n> Line 124: is $\\circ_{latent}$ different from $\\circ$?\n\nWe would like to point out that $\\circ$(action in the observation space) is different from $\\circ_{latent}$(action in the latent space), and we are sorry if this notation was confusing. We denote the action on input space by the binary operation $\\circ: G \\times X -> X$ while we denote the action on the latent space by $\\circ_{latent}: G \\times R^{a\\times m} \\rightarrow R^{a\\times m}$. In the revised manuscript, we define the latent space action without using \\circ_{\\rm latent} to avoid the confusion.\n\n\n> Line 143: Maybe consider a different notation than $M(g, s)$ \n\nWe are sorry that this notation was confusing. We were originally hoping to distinguish the transition operator in the ideal model from the “estimated transition operator” corresponding to the group element $g$ by using the subscripted $M_g$ to refer to the former and using $M(g,s)$ to refer to the “transition operator estimated from the sequence that begins with s and transitions with $g$“. \nWe revised Section 3 to include the description of the difference between $M(g,s)$ and $M_g$. \n\n> Line 220: \"digit4 only training\": does it mean that the encoder/decoder have not seen other digits during training?\n\nYes. The models only observed the examples of digit 4 during the training of the encoder/decoder. For the evaluation of downstream regression task, we trained a linear classifier on feature space for each model using the training sets containing all digits, and evaluated the regression error ($R^2$ score) on the test sets containing all digits.\n\n\n> Line 233: what's the reason for splitting $H$ into 8 (and not some other number of) subtensors? Line 250: what does \"fixed block structure\" mean? Does dividing $H$ into 8 subtensors count at fixing the block structure?\n\nAs we report in our manuscript, the set of $M^*$s obtained from the latent variable that were trained to well-predict the future in our framework can be simultaneously block-diagonalized, and each block has a theoretical connection with the irreducible representation. We emphasize, however, that this simultaneous block-diagonalizability was not achieved by some explicit algorithm or mechanism to learn it, but by simply training the model to be able to well-predict the future. We therefore wanted to experiment what happens if we introduce the inductive bias of the block-diagonal structure and train the network so that $M^*$ from each sequence will be a direct sum of a fixed number of 2x2 blocks (Say $M_i*$; i=1,..8). When $M^*$ is assumed to take such a form, learning $M^*$ acting on $\\mathbb{R}^{2 \\times 256}$ dimensional tensor is equivalent to learning $M_i^*$ acting on $2 \\times 256$ dimensional tensor for each $i=1,...,8$. 
\nIn the pioneering work of [8] that learns the symmetry in a linear system, the authors hard-code these blocks (See equation (2) of [8]). Our result suggests that, when the objective function is appropriately constructed, such a structured inductive bias might be unnecessary. We also discuss the motivation of our ablation study in our response to the Reviewer htB6.\n\n> Line 303: what does \"sufficiently large\" mean for $m$?\n\nWe are sorry for the lack of explanation. If $m$ is less than $a$, we cannot obtain the pseudo inverse because of the rank deficient in $\\Phi(s_{t-1}) \\Phi(s_{t-1})^{\\rm T}$. Thus $m$ should be at least larger than $a$. We added this explanation to the revised paper.\n", " > The choice of $M$ seems to rely on the knowledge of true dynamics. How robust is the method to model mismatch?\n\nWe do not assume anything about the true dynamics other than the stationary properties, such as constant acceleration / constant velocity over short frames (as short as three). Such an assumption is natural when the dynamics is continuous. [1,2] also takes the similar strategy of splitting the time sequences into shorter time frames over which the structure of the dataset is stationary (The latter for example uses such sub-sequences of length as long as 16). We indeed expect our model to fail in prediction if the future velocity/acceleration changes dramatically. \nWe would also want to emphasize that the goal of our study is not to propose a general method of video prediction, but to point out the novel connection between extrapolation ability and the sequential/algebraic data structure, which were previously discussed based on hard-coded inductive bias. As the most important consequence, we are able to infer from each sequence a transition operator that causes a consistent effect on all other sequences (i.e. equivariance). \nPlease See [3], for example, for previous efforts to identify the structure in sequential dataset. \n\n\n> Would the choice of $T_c,T_p$ affect the performance?\n>> Choosing $T_c=2$ should be sufficient if the data perfectly follows the model assumption (e.g. constant velocity), but this may not be the case when there's model mismatch.\n\nIndeed, we observed that using longer prediction horizon $T_p$ or longer $T_c$ generally improves the performance. Also, on noisy sequential datasets (e.g. when the velocity of angles are sampled from gaussian distribution), using larger $T_c$ should be particularly effective since the internal least square algorithm absorbs the noises when $T_c$ is large. However, as we showcase in our manuscript, $T_c=2$ and $T_p=1$ sufficed for competitive performance. This implies that our extrapolation objective is uncovering the representation-theoretic symmetry in the dataset from mere triplets. \n\n> Would the proposed method be able to handle multiple objects?\n\nWe have been conducting preliminary experiments and we are seeing some promising results with additional modules; however, because the prediction itself is not the main focus of this study, we excluded our preliminary results from the scope of our manuscript.\n\n\n> Sec 5.2, swapping $M^*$: my understanding is that $M^*$ should depend on the sequence-specific action. However in Fig 4(a), the two sequences clearly have different actions (e.g. the bottom row seems to be doing both color rotation and shape rotation, while the top row only has shape rotation) and different orbits (since the digits are not the same). 
How could be the $M^*$ same then?\n\nFirst, let us clarify that the actions applied in the pair of sequences in Figure 4a are exactly the same; the velocities of hue angle and rotation angle are shared across the pair. Because of the definition of the HSV space, the change in the value of the hue might not be in agreement with the human perception: For example, one might see the gradient around yellow w.r.t the hue value is steeper than the gradient around the red (like in our example in Figure 4a). Also, it was difficult to clearly perceive the changes in color in the original version of the figure, because we were shading the images in the $T_c$ part of the sequence to distinguish them from the $T_p$ part of the sequence. We therefore removed the shade for better visibility. We are sorry for the lack of clarity on this matter. \nWe would like to note that the most important fruit of our work is the way to infer $M^*$ that is exclusively dependent on $g$; indeed, our inferred $M^*$ seems to be dependent on $g$ but independent of the choice of $s_1$. \nAs an important consequence, we can apply one $M^*$ inferred from one sequence to another sequence in a consistent manner. \nWe investigate in our theory (Section 3.1) the mechanism underlying this acquired equivariance property. \n\n\n> Fig 8 in the appendix: line 255 in the main text says that the proposed method gives \"better representations\" but it's unclear how so. Please explain the success/failure modes of the proposed method and the reasons.\n\nBecause we were talking about the ability of our model to predict the transition parameters from the estimated $M^*$, we used the word “wellness” here in the context of representation learning. We removed the ambiguous expression in the revision. We also added the result of linearly regressing the digit classes in Appendix, and our representation performs competitively on this task as well (Figure 9). \nHowever, as we mention in our response to all reviewers, the primary merit of our study is not about downstream tasks. We presented the results in Figure 8 to study how our representation relates to the structural factors of the dataset. Please also see our response to Reviewer dgWX.\n\n", " Thank you very much for spending time to review our paper. Below we respond to each concern and comment in your review. \nPlease see the last part (part4) of this response for the numbered references used in our comments below.\n\n> it seems unfair to compare with baseline models that do not have such knowledge about the data.\n\nOn the contrary, all the baseline models we compare against our method in the experiment section (Neural M*, Rec Model, Neural Transition) also assume that the transitions are stationary in each sequence and seek a time-invariant operator to predict the future, so that the comparisons are therefore made on the same background of prior knowledge. In particular, all but Neural Transition assume that the latent of each sequence transitions with a sequence-specific linear operator $M$ in the way of \n$\\Phi(s_{t+1}) = M \\Phi(s_{t})$, just as assumed in our proposed model. Neural Transition assumes the model $M(\\Phi(s_{t-1}), \\Phi(s_{t})) = \\Phi(s_{t+1})$ with a possibly nonlinear function $M$. This transition model too is based on the same prior knowledge.\n\nFor the motivations and more detailed discussion on the ablation studies, please see our response to htB6. 
Also, because we are computing $M^*$ by Least Square regression, by construction our method is robust against some degree of homoscedastic fluctuation in the speed as well. \nWe would also like to emphasize that one message we would like to convey in this paper is that we can take advantage of the continuity of the time series dataset to automatically capture the hidden structure of the dataset by simply aiming to improve the future prediction performance. \n\nThe situation in which we can obtain a set of short stationary sequences is also not irregular as well, because any continuous sequence with sufficient time-resolution is always piece-wise stationary in the way of local Taylor approximation. \n \n> Please explain why the connection to meta-learning is necessary or how it can help provide more insights.\n\nIn our paper, we are experimenting with what we call “Neural $M^*$” in order to answer this question. In the “Neural $M^*$” version of our method, we are updating the $M^*$ and $\\Phi$ simultaneously without constructing the explicit inner loop. \nThis is in contrast to our main proposed approach that solves the inner optimization completely and directly backpropagates the change in $M^*$ induced by the change in $\\Phi$, which is optimized in the outer loop. \nThe comparison between “Neural $M^*$” and our main proposed method suggests that there is something important about this *meta* aspect in acquiring the extrapolation capability. \nWe believe that further investigation of this phenomenon is the most important agenda in our future works. \n\n\n> Please discuss the relation to the line of work about incorporating physics into the prediction model, e.g. PhyDNet (Guen & Thome 20), and disentangled representation learning in videos in general.\n\nThe “disentanglement” discussed in PhyDNet is significantly different from the “disentanglement” in our paper, both in terms of purpose and nature. \nPhyDNet disentangles the “dynamics that can be explained with a certain form of PDE” from the residual dynamics. PhyDNet also engineers its architecture for the purpose of achieving this “disentanglement”. \nHamilton generative network (HGN) [5] is another instance of the physics-inspired prediction model. It tries to encode sequences into (abstract) momentum and position variables.\nAs another form of disentanglement, [6] as well as [7] take the approach of separating the time-invariant component from time-variant component. \n\nMeanwhile, the disentanglement that is discussed in our paper is more relevant to those discussed in [8,9], which pertains to the algebraic decomposition of the transition operators in the field of representation theory [8,10] and symmetry learning [11,12]. \n\nAlso, we would like to emphasize that, unlike many studies with a primary focus on disentanglement, we do not have any specific design of disentanglement in our framework; as we report in our work, the disentanglement emerges as the byproduct of our endeavor in training the model to predict the (possibly very short) time sequence through meta-learned linear latent operators.\nAs we discuss in the theory section, this surprising coincidence is in alignment with the theory of group representation and equivariance. \nThis is in strong contrast to previous representation learning methods for video and the family of studies whose primary goal is the disentanglement of some specific form, as we are discovering the disentangled structure without explicitly seeking it. 
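The closed-form least-squares estimate referred to in these responses, together with the rank condition on $m$ discussed above, can be written down directly. This is a hedged sketch for a single transition with the latent laid out as an a x m matrix; it is not the authors' code.

```python
import numpy as np

def estimate_M(phi_prev, phi_next, eps=0.0):
    """Least-squares transition operator for phi_next ≈ M @ phi_prev.

    phi_prev, phi_next: (a, m) latent tensors of consecutive frames.
    The a x a Gram matrix phi_prev @ phi_prev.T must be full rank, which is
    why m has to be at least a (the point raised in the rebuttal); eps adds
    optional ridge regularization.
    """
    a, m = phi_prev.shape
    assert m >= a, "m < a makes the Gram matrix rank-deficient"
    gram = phi_prev @ phi_prev.T + eps * np.eye(a)
    return phi_next @ phi_prev.T @ np.linalg.inv(gram)

rng = np.random.default_rng(0)
a, m = 4, 32
M_true = rng.normal(size=(a, a))
phi_prev = rng.normal(size=(a, m))
phi_next = M_true @ phi_prev
print(np.allclose(estimate_M(phi_prev, phi_next), M_true))  # True
```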
\n", " > Regarding the comparison against SimCLR and CPC, I find it a bit hard to understand the experiments and interpret the results. Can you please provide more details?\n\nWe conducted this experiment to compare how well each method encodes the information of transition in each sequence. This is the reason why the regression targets are those that determine the transition (translation, color rotation, etc.). Please recall that our model unwittingly recovers the hidden structure in the sequential dataset by simply training the model to be able to linearly predict the future. \nThis experiment is a part of our investigation of how the uncovered hidden structures relate to the known parameters of the sequences, and how well the learned representation can be used to predict those parameters. \nWe chose SimCLR and CPC as baseline methods simply because they are de-facto standards of deep unsupervised representation-learning methods. \nFollowing the convention in the standard evaluation of representation learning, we also added the result of regressing the digit classes for Sequential MNIST/MNIST-bg in the revision (Figure 9). We can confirm in all results that our method performs competitively (and much better than SimCLR and CPC), and that the learned structures are useful in the downstream task as well.\nIn the revised version of our paper, we updated the explanations for these comparative experiments.\n\n> With noisy background in the image examples, do you find the optimization more difficult? Intuitively it adds challenges to learn the representations for the objects there, but I wonder how harder it becomes in your case, since you assume a purely static background.\n\nWe must say that it depends on the types of noise to be considered. If the background noise has some stationary property, our method possibly learns the dynamics. But if not, we may need some probabilistic models to model the noisy information (Because we have not tried both situations in our paper, we cannot tell how hard it is). Meanwhile, as we report in our paper, the training with ImageNet background (MNIST-bg) did not seem to negatively affect the efficacy of our method. \n\n**References**\t\n\n[1] T. Keller and M. Welling. Topographic vaes learn equivariant capsules. NeurIPS, 2021. \n\n[2] A. Van den Oord, et al. Conditional image generation with pixelcnn decoders. NeurIPS, 2016.\n\n[3] D. Rezende and S. Mohamed. Variational inference with normalizing flows. ICML, 2015.\n\n[4] I. Kobyzev, S. JD Prince, and M. A. Brubaker. Normalizing flows: An introduction and review of current methods. TPAMI, 2020.\n\n[5] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. NeurIPS, 2020.\n\n[6] T. Cohen and M. Welling. Learning the irreducible representations of commutative lie groups. ICML, 2014.\n\n[7] T. Keller and M. Welling. T. Keller and M. Welling. Predictive coding with topographic variational autoencoders. ICCV workshop, 2021\n", " > Can you motivate the ablation study in the experiments? I don’t understand why you are doing it. \n\nBelow we elaborate on the motivation behind each variant of our method used in our ablation study.\nWe added a similar description in Appendix as well. \n\n\n*Fixed block models*\n \nIn the pioneering work of [6] that endeavors to learn the symmetry in a linear system using the representation theory of commutative algebra, the authors hard-code the irreducible representations/block matrices in their model. 
Our study is distinctive from many applications of representation theory and symmetry learning in that we uncover the symmetry underlying the dataset not by introducing any explicit structure but by simply seeking to improve the prediction performance. We therefore wanted to experiment on how the introduction of the hard-coded symmetry like the one in [6] would affect the prediction performance.\nAs we see in the result, introducing the hard-coded symmetry as an inductive bias does not positively affect the predictive performance, suggesting that, in truth, it can be unnecessary (if not harmful) to use a hard-coded symmetry in order to find the symmetry in the dataset. \n\n*Reconst models* \n\nWe trained this model to show the importance of the use of the validation sequence ${\\bf s_p}$. \nIn our default algorithm, we train our encoder and decoder with the prediction loss $\\mathcal{L}^p$ in eq.(2) over the future horizon of length $T_p-T_c$ . We therefore wanted to verify what would happen to the learned representation when we train the model with the reconstruction loss $\\mathcal{L}^r$ in eq.(3), in which the model predicts the observations contained in the conditional sequence. \nAs we discuss in Section 6, the estimated transition with the Rec. model was inaccurate, possibly because the transition matrix overfitted to the conditional sequence-specific information, and the learned encoder was far from the equivariant map we aim to obtain. Similar results are reported in [7], in which the author showed that the latent structure of the predictive model well captured the cyclic structure of the sequences. We re-confirmed that the prediction of the future frames is important in learning the symmetry behind the sequence.\n\n*Neural*$M^*$ \n\nOur method is *meta* in that we distinguish the internal training of $M^*$ for each sequence from the external training of the encoder $\\Phi$. Put in another way, the internal optimization process of $M^*$ itself is the function of the encoder. To measure how important it is to train the encoder with such a *meta* approach, we evaluated the performance of Neural$M^*$ approach. \nTo reiterate, Neural$M^*$ uses a neural network $M^*_\\theta$ that directly outputs $M^*$ on the conditional sequence, and trains the encoder and decoder via \n\n$ \\sum_{t= T_c+1}^{T_c+T_p} || \\Psi(M_\\theta^* (\\mathbf{s_c} )^{t-T_c} \\Phi(s_{T_c})) - s_t ||_2^2 $ \n\nthereby testing the training framework that is similar to our method *minus* the *meta* component. \nOur experimental results show that the *meta* component is in fact very important in extracting the “global” symmetry that enables the inference of transition operators that acts on all element in the dataset in the same way (i.e., equivariance). \n\n*Neural transition*\n\nOne important inductive bias that we introduce in our model is that we assume the latent transition to be linear. We therefore wanted to test what happens to the results of our experiment if we drop this inductive bias. \nNamely, for $T_c=2$ we trained a 1DCNN that takes in the past two time frames to output the next frame. \nAs we can confirm in our experimental result, the linear inductive bias in the latent space has a positive effect on extrapolating the future.", " Thank you very much for the valuable comments. Below we respond to each concern / question raised in your review. \nPlease see the last part (part3) of this response for the numbered references used in our comments. 
\n\n> I mainly wonder what we additionally achieved with this proposed method, compared to the existing methods on disentangled representation learning. in my opinion, can also be framed as generating examples by varying factors of variation using models in [1,2,3]. \n\nWould you please let us know how VAE can be used to “predict” the future for any given sequence? As far as we understand, the studies listed in the comment only study $p(x)$ and do not solve the future prediction task.\nOne straightforward application of VAE to the sequential dataset in our paper would be to disregard the sequential relation in the dataset and train the generative model that can mimic the “unordered” version of the dataset while introducing some form of the regularization to identify the independent factor of variations. However, such a generative model does not seem to directly help predict the future of the sequence that requires the explicit form of the transition that includes speed/acceleration. \nThe very recent topographic VAE(TVAE) [1] is one of the works with a similar goal that uses a VAE framework as well; however, unlike our approach, TVAE engineers an explicit “cycle” framework in their model, and using VAE for the prediction task investigated in our study does not seem like a trivial task. \nWe also note that we cannot cast the TVAE directly to compare on the sequential dataset we use in our dataset, because we deal with a set of sequences with varying speeds and different cyclic orders. \n\n\nIndeed, we also do consider it an important future work to develop a framework that can deal with future prediction with uncertainty, modeling $p({\\bf s_p} | {\\bf s_c})$ by leveraging the popular models like VAE, PixelCNN [2], Normalizing flows [3, 4], and recent diffusion probabilistic models [5].\nHowever, we believe that this is not within the scope of our current work, and as such, we do not believe it necessary to use VAE in our problem setting. Also, unlike VAE for which we need to carefully tune the hyperparameter to balance the KL term and the reconstruction term, our method does not require any hyperparameter that we have to cross-validate on each dataset. ", " Thank you very much for your valuable comments. We corrected the typographies and inconsistency of notations in the revision. \n\nAs for the concern regarding the downstream task, we show in the figure 8 in the appendix section the results of linearly regressing the true transition parameters of the sequences from $M^*$. Because we are computing $M^*$ with $T_c =2$ in most of our experiments, the $M^*$s used in this evaluation are regressed from a pair of latent variable; therefore, all the predictors used for Fig 8 are functions of $[\\Phi(s_1), \\Phi(s_2)]$, and we are providing the result of regressing the transition parameters from $[\\Phi(s_1), \\Phi(s_2)]$ (For SimCLR and CPC evaluation in this experiment, we are directly regressing the parameters from $[\\Phi(s_1), \\Phi(s_2)]$ ). \nAlso, in the revision we also added our result of linearly regressing the digit class of $s$ from $\\Phi(s)$, and our representation performs better than the representation learned by SimCLR and contrastive predictive coding (CPC). Please see Figure 9 for more details about the results. \n\nAt the same time, it is generally not straightforward to determine how the representation should be evaluated, and there is no unique method to justify the evaluation methods. 
It is because we wanted to study the *wellness* of representation from multiple viewpoints that we added the regression results of Fig 8 in addition to the equivariance error, which we believe measures the direct merit of our study by quantifying how well we learned the hidden global symmetry from the local training of each individual sequence. ", " Dear all reviewers,\n\nThank you very much for spending time reviewing our work. As suggested by the reviewers, we fixed the typographies and the notational problems in the revision to improve the clarity of our writing.\nWe respond to the questions and concerns raised by each reviewer in an individual response to each reviewer. \n\nOne main message that we would like to convey in this paper is that we have found a meta-type unsupervised framework that can uncover the symmetry of the dataset when trained to extrapolate the future well on different sequential datasets. \nThe uncovered symmetry takes the form of disentangled features and their property is in strong alignment with the group representation theory. Our method is distinctive from many past studies in that we do not engineer any specific framework to extract the symmetry/disentangled representation; the symmetry emerges naturally from the objective designed solely for the purpose of training a model that can predict the future linearly in the latent space.\nIn this regard, we believe that the value of our study is not very much about the sheer ability of our proposed method to predict the future or to disentangle the features, but about the novel connection between meta-type training, extrapolation, and algebraic symmetry including those pertaining to disentanglement. \n\nAlso, in Figure 6, we updated the results of a 1st-order version of our proposed method; the result in the original manuscript was the outcome of a wrong snapshot model (premature model). We therefore replaced the results with the outcome of the models at the end of training. The replacement, however, does not change the claim of our paper; the second-order model version achieved the best extrapolation performance in this experiment. ", " The paper studies the problem of learning symmetry from sequential data with a certain stationarity property. In particular, from time sequences of length at least three, meta-learning is used to learn representations such that a future observation can be predicted well by a linear transition. It is shown that doing so can tease out the disentangled structure by simultaneous block-diagonalization. Each block then corresponds to a disentangled feature. \n\nThe meta-learning framework is very straightforward: We learn an injective mapping \\phi for each time step, such that a linear transition relates time step t to time step t+1. The loss function involves a decoder that maps the linear transition \\phi_t-1 to \\phi_t. The transition can be parameterized in various complex ways, but as mentioned above only a linear map is used. The inner loop consists of minimizing the prediction loss, while the outer loop updates the encoder. The main assumption is that the velocity/acceleration is preserved within each observation. Experiments on Sequential MNIST, 3D Shapes, and SmallNORB show the efficacy of the proposed method. \n\n\n Strengths:\n- The basic idea of the paper is quite simple: That we can learn some underlying algebraic structure in an unsupervised way when the transitions can be represented linearly in some latent space. 
It follows the line of work on Slow Feature Analaysis, ICA and more recent work by Keller and Welling, and also seems reminiscent of work on Koopman analysis (although multiple dynamics are considered here). The paper proposes to use meta-learning to predict a future observation and model only linear transitions. The meta-learning setup is also quite simple. \n- The experiments are carried out on reasonable datasets: Sequential MNIST, 3D Shapes and SmallNorb. The overall results and quality of ablation is good. \n- The extension to constant acceleration is also reasonable, as are the discussion of limitations. \n\nWeaknesses: \n- The paper is actually quite hard to read. I think it can be made really good with a smoother presentation. Even though the main idea is quite simple, I often had difficulty following the details and trying to decode what the authors might have meant. \n- The experiments are good in the sense that they cover some good datasets, and have good ablations. However, I have difficulty in placing their significance, given the authors only report measures such as equivariance error. Can we have another downstream task in which such learnt representations are shown to be helpful? If not, is it possible to have some reasonable baselines to get a sense of how good the variants proposed here are? Overall I like the main idea -- it is simple and elegant, but I am not quite sure what to make of the results, and I think the paper could do a better job at presenting them. See above: \n\nMinor comments:\n\nThere are quite a few typos and somewhat awkward sentence constructions. Below are a few examples:\n- Typo, line 16: *in machine learning? \n- Line 20: should be \"has succeeded\"\n- Line 34-35: \"in the way of meta-learning\" -> by meta-learning/by the way of meta-learning/by means of meta-learning\n- Line 44-45: \"There are rich literatures in unsupervised/weakly supervised\" -> There is a rich literature...\n- Line 63: \"Many studies impose algebraic constraints that reflects some form of geometrical assumption.\" -> that reflect some form of..\n- Line 86: identifiebility -> identifiability \n\n Yes, adequately addressed. ", " This paper proposes to discover structural properties of the data by solving a sequential prediction tasks.\nThe idea is that solving the prediction tasks produces equivariant models, which by guarantees from representation theory, means that the learned representations are simultaneously block-diagonalized, with blocks corresponding to disentangled factors.\n\nSpecifically, the paper learns an encoder $\\Phi$, a decoder $\\Psi$, and a matrix-valued function $M$, such that given adjacent observations $s_t, s_{t+1}$, the difference $\\||\\Psi (M\\Phi(s_t)) - s_{t+1}\\||_2^2$ is minimized. The training borrows idea from meta learning, where the algorithm first estimates $M$ based on a part of the sequence (denoted $\\mathbf{s}_c$), and $\\Phi$ is then estimated using this $M$ and the rest of the sequence (denoted $\\mathbf{s}_p$).\n\nEmpirical results show that 1) the proposed method is able to train on only adjacent time frames and extrapolate to unseen steps in the future, and that 2) the learned linear transitions can be simultaneously block-diagonalized, with blocks corresponding to 1 factor of variation. 
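For reference, the two-step objective described in this summary can be written out; this is one plausible reading (an inner latent-space least-squares fit, consistent with the closed-form $M^* = H_{+1}H_{+0}^\dagger$ discussed elsewhere in the thread, followed by the outer pixel-space prediction loss quoted in the authors' responses), and the paper's exact parameterization and regularization may differ:

$$ M^*(\mathbf{s_c}) \in \arg\min_{M} \sum_{t=1}^{T_c-1} \big\| M\,\Phi(s_t) - \Phi(s_{t+1}) \big\|_2^2, \qquad \min_{\Phi,\Psi} \sum_{t=T_c+1}^{T_c+T_p} \big\| \Psi\big(M^*(\mathbf{s_c})^{\,t-T_c}\,\Phi(s_{T_c})\big) - s_t \big\|_2^2 . $$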
Strengths\n- It's an interesting idea to learn disentangled representations by the simple prediction task.\n- The paper presents a variety of empirical results and is clear on implementation details.\n- The paper provides hypotheses on some empirical results (Sec 6) and is open about the limitations of the proposed methods, e.g. failing to work on non-invertible transitions.\n\nConcerns\n- [Clarified in the response] While inductive biases are necessary as established by prior work, the proposed method seems too tailored to the specific data structure, and it seems unfair to compare with baseline models that do not have such knowledge about the data.\n- [Draft updated] The messages from some empirical results could be made clearer; please see detailed questions below.\n- [Clarified in the response] Please discuss the relation to the line of work about incorporating physics into the prediction model, e.g. PhyDNet (Guen & Thome 20), and disentangled representation learning in videos in general.\n\n=== **Update** ===\n\nThe authors' responses have address most of my concerns. Many of my previous questions were because I didn't get the main point the paper, since I had trouble parsing the paper as I mentioned in comments on clarity.\nThe writing is now improved and my review has been updated accordingly.\n Questions:\n- The choice of $M$ seems to rely on knowledge of the true dynamics. How to obtain such knowledge in general? How robust is the method to \"model mismatch\", e.g. when internal optimization is performed on sequences with noises or with varying velocity? How can the model be extended to handle some amount of model mismatch?\n- Would the choice of $T_c, T_p$ affect the performance? \n - Choosing $T_c=2$ should be sufficient if the data perfectly follows the model assumption (e.g. constant velocity), but this may not be the case when there's model mismatch.\n- Would the proposed method be able to handle multiple objects?\n- Sec 5.2, swapping $M^*$: my understanding is that $M^*$ should depend on the sequence-specific action $g$. However in Fig 4(a), the two sequences clearly have different actions (e.g. the bottom row seems to be doing both color rotation and shape rotation, while the top row only has shape rotation) and different orbits (since the digits are not the same). How could $M^*$ be the same then?\n- Fig 8 in the appendix: line 255 in the main text says that the proposed method gives \"better representations\" but it's unclear how so. Please explain the success/failure modes of the proposed method and the reasons.\n- Line 276, about simultaneously block-diagonalizable: please provide reference for this.\n\n\nClarifications:\n- Line 124: is $\\circ_{latent}$ different from $\\circ$?\n- Line 220: \"digit4 only training\": does it mean that the encoder/decoder have not seen other digits during training?\n- Line 233: what's the reason for splitting $H$ into 8 (and not some other number of) subtensors?\n- Line 250: what does \"fixed block structure\" mean? 
Does dividing $H$ into 8 subtensors count at fixing the block structure?\n- Line 303: what does \"sufficiently large\" mean for $m$?\n\n\nWriting: overall I find the paper a bit hard to parse, possibly due some inconsistent notations and my own in-familiarity to the topic.\nSome concrete things:\n- Please fix citation [13].\n- Line 115: do you mean members of $S$ (rather than $s$, which is itself a member of $S$)?\n- Line 143: maybe consider a different notion than $M(g, s_1)$, since $M$ has been defined to be a single-argument function from $\\mathcal{G}$ to $\\mathbb{R}^{a \\times a}$.\n- Line 179: should it be \"optimized on $s_p$\"?\n- Inconsistent notations:\n - $H_{+1} H_{+0}^\\dagger$ in eq (4) and $H_2H_1^\\dagger$ in Fig 1.\n - $T_p$ on line 227, $t_p$ on line 245.\n- Line 233 & 234: it would be more consistent to use either 256 or $m$ (but not both) in defining $H, H_k$.\n- The paragraph on line 252: it would be better to briefly explain what \"better representation\" means.\n- Line 289: there seems to be a typo in the definition of $\\hat{M}^*$.\n The paper discusses the limitations of the proposed method, such as failing to work with non-invertible transitions, e.g. ShapeNet dataset.\n\nThe paper also discusses potential societal impact if the method were to employed in real-world sequential prediction problems.", " The authors of this paper investigate the time series models with certain stationary properties, and use the learned symmetry structure for predicting the future in time series. They consider the type of time series where each sequence is generated by initial observation and some fixed transition operator. Then they propose to optimize a homeomorphic function with equivariance property w.r.t. a group of transition operators. In the experiments, they evaluate the proposed approach on sequences generated based on 3 image datasets.The results show that their method can successfully learn transition operators such object rotation, hue rotation, and object translation. By applying these operators to the initial time step, they show that they could predict the future sequence for a few number of time steps. Originality: In my opinion, this is an interesting paper because they try to investigate the equivariance property in stationary time series models. From this perspective this idea is novel, though I believe the problem defined in this paper can be alternatively tackled using existing methods on disentangled representation learning. (I will think of this weakness more as insignificance rather than originality, because in general studying equivariance in time series is a great topic, and it is not reasonable to argue about originality merely because some existing methods can solve the same type of problems.)\n\nSignificance: Regarding disentangled representation learning. I mainly wonder what we additionally achieved with this proposed method, compared to the existing methods on disentangled representation learning. The reason is that, the authors specifically consider time series with fixed transition operators, e.g. rotation, translation, and the goal is to predict the future observations in the sequences. Because of the strong assumption on fixed transition operators, these prediction tasks, in my opinion, can also be framed as generating examples by varying factors of variation using models in [1,2,3]. 
\nWhile the authors have explained how their work differs from the prior work on disentangled representation learning, I would appreciate it if they do some comparison against some of those methods such as [1,2,3].For both learning disentangled representations and generation tasks, those VAE-based methods will also achieve impressive results in datasets such as MNIST, FashionMNIST, Dsprites. On the other hand, this proposed method can only do point-estimation, while those VAE-based models can characterize the prediction uncertainty in a variational manner. This is my main concern about the significance of this work.\n\nQuality: This paper is well structured in the sense that they provide proof of theory that supports their motivation and sufficient empirical evaluations as well. \n\nClarity: This paper is well written and clearly conveys the idea in the paper.\n\n[1] Chen, Ricky TQ, et al. \"Isolating sources of disentanglement in variational autoencoders.\" Advances in neural information processing systems 31 (2018).\n\n[2] Kim, Hyunjik, and Andriy Mnih. \"Disentangling by factorising.\" International Conference on Machine Learning. PMLR, 2018.\n\n[3] Esmaeili, Babak, et al. \"Structured disentangled representations.\" The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.\n I have some questions:\n\n- Can you motivate the ablation study in the experiments? I don’t understand why you are doing it. \n\n- Regarding the comparison against SimCLR and CPC, I find it a bit hard to understand the experiments and interpret the results. Can you please provide more details?\n\n- With noisy background in the image examples, do you find the optimization more difficult? Intuitively it adds challenges to learn the representations for the objects there, but I wonder how harder it becomes in your case, since you assume a purely static background.\n\nGeneral suggestion:\n\n- I think it is interesting to study the equivariance property in time series in general. On the other hand, I feel it is also important to showcase the robustness / generalization when tackling trajectory prediction tasks. In other words, the point estimations are not that great in many use cases, and we would want to characterize the model uncertainty. So I wonder if you could do some reasoning about the prediction uncertainty in this proposed method.\n To my best knowledge, the authors have comprehensively discussed the limitations of their work and the potential negative impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 2 ]
[ "2MAsWYKXaV9", "RSCNYdOPnIz", "RknCLq83Z6e", "sQLbdUfBBLX", "047ikamlCzD", "_EuDgakG8_g", "itNlZP5Xajg", "vjGtEqfUY9f", "1bvMFKhj01", "TUSCUkKtAt8", "R7oGXWdDe3S_", "RknCLq83Z6e", "FjO0jJwnbpr", "nips_2022_7b7iGkuVqlZ", "nips_2022_7b7iGkuVqlZ", "nips_2022_7b7iGkuVqlZ", "nips_2022_7b7iGkuVqlZ" ]
nips_2022_HxZpawUrv9Q
A Conditional Randomization Test for Sparse Logistic Regression in High-Dimension
Identifying the relevant variables for a classification model with correct confidence levels is a central but difficult task in high-dimension. Despite the core role of sparse logistic regression in statistics and machine learning, it still lacks a good solution for accurate inference in the regime where the number of features $p$ is as large as or larger than the number of samples $n$. Here we tackle this problem by improving the Conditional Randomization Test (CRT). The original CRT algorithm shows promise as a way to output p-values while making few assumptions on the distribution of the test statistics. As it comes with a prohibitive computational cost even in mildly high-dimensional problems, faster solutions based on distillation have been proposed. Yet, they rely on unrealistic hypotheses and result in low-power solutions. To improve this, we propose \emph{CRT-logit}, an algorithm that combines a variable-distillation step and a decorrelation step that takes into account the geometry of the $\ell_1$-penalized logistic regression problem. We provide a theoretical analysis of this procedure, and demonstrate its effectiveness on simulations, along with experiments on large-scale brain-imaging and genomics datasets.
Accept
The decision is to accept this paper. The paper presents a method for producing asymptotically valid p-values when testing the null hypothesis of conditional randomization tests in sparse logistic regression. The method builds on a previous distillation method that examines correlations between residuals for the label y and the focal covariate x_j when they are projected onto the remaining covariates. The method corrects a bias that arises in this distillation method due to the non-linearity in penalized logistic regression. The authors prove the asymptotic validity of the resulting p-values and study the power and FDR of the procedure. The reviewers agreed that this is a strong method and a clearly written paper. The authors answered all major questions from the reviewers and made changes in response to reviewer feedback.
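The distillation statistic this meta-review refers to is simple enough to sketch: regress the label and the focal covariate on the remaining covariates and test the correlation of the two residual vectors, scaled by $\sqrt{n}$. The snippet below is an illustrative linear version only, assuming centered data and $n > p$ so that plain least squares can stand in for the sparse (lasso / scaled-lasso) distillation actually used, and it omits the decorrelation correction that defines CRT-logit.

```python
import numpy as np
from scipy.stats import norm

def distilled_statistic(X, y, j):
    """Distillation-style test statistic for feature j (linear illustration).

    Regress both y and X[:, j] on the remaining covariates by least squares,
    take the residuals, and return sqrt(n) times their normalized correlation.
    Assumes centered y and X with n > p; dCRT/CRT-logit replace these nuisance
    fits with sparse regressions and, for logistic models, add a decorrelation step.
    """
    n = X.shape[0]
    X_minus_j = np.delete(X, j, axis=1)
    coef_y, *_ = np.linalg.lstsq(X_minus_j, y, rcond=None)
    coef_x, *_ = np.linalg.lstsq(X_minus_j, X[:, j], rcond=None)
    eps_y = y - X_minus_j @ coef_y          # residual of y given X_{-j}
    eps_x = X[:, j] - X_minus_j @ coef_x    # residual of x_j given X_{-j}
    return np.sqrt(n) * (eps_y @ eps_x) / (np.linalg.norm(eps_y) * np.linalg.norm(eps_x))

def p_value(stat):
    """Two-sided p-value under the asymptotic standard normal null."""
    return 2.0 * norm.sf(abs(stat))
```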
train
[ "qWBbw1_Vu8S", "5uXu-7xf91k", "mwVvFUdjId", "f657JpJRTtR", "JV-HKgaETT", "fPyMyHEfXop", "rF44lTztoV8", "Rj2hJ15u7f", "O07w4mb9fmJ", "TV-tavsEhpU", "8O1MMZ4dv7c", "TDLn0erASxA", "aoGxfP2HC_b" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you to the authors for provided detailed answers to all of the reviewers and the revised manuscript. I have no further questions, and my concerns were addressed.", " I thank the authors for their responses and for editing the paper. I have no further questions at present.", " Thank you for the through response and the interesting paper. I've increased my score to 7 based on the response and the revised manuscript, which fully addressed my primary comments.", " > The notation in appendix A was a bit confusing to me. In particular, in\n> several places (e.g., the definition of $v^*$) a vector is defined which only\n> makes sense later under the assumption that $j=1$. For example $v^*$ is\n> defined as (1, $-w^{0, j}$, but then $v^{*T}\\nabla\\ell(\\beta)$ only makes\n> sense if the 1 in lines up with $j$. This is obviously not a big problem and\n> doing something more \"correct\" might even be notationally clunky enough to\n> not be worth it, but at the very least a quick explanation of the abuse of\n> notation would be helpful.\n\nThis is a good remark, and indeed we have made an abuse of the notation. As the\nreviewer pointed out, we used the fact that permutation invariant of the\nvariable index in $\\hat{v}$ or $v^*$ and correspondingly with\n$\\nabla\\ell(\\beta)$ does not change the value of $v^T\\nabla\\ell(\\beta)$, and we have added this reasoning to the Revised Appendix as Remark A.1.\n\n> I may be missing something, but is some term missing from the first line in\n> the display after line 415 (same for the first line in the display after line\n> 417). Both of these rely on lemma A.4 and so I believe they should also pick\n> up a term that involves $(s^* \\vee s') \\log(p) / n$. This obviously does not\n> affect the result (it gets absorbed into the constant hidden by the\n> $\\precsim$ notation at the bottom of each display) but it did make it\n> difficult to follow the argument.\n\nThe reviewer is correct: there is missing term $(s^* \\vee s') \\log(p) / n$\n(from Lemma A.4), we have updated this in the revised Appendix.\n\n> Along these lines, is Lemma A.4 missing an absolute value sign on the right\n> hand side of both equations in the displace below line 408?\n\nIndeed, the absolute value sign is missing on the LHS of the lemma -- we fixed\nthis in the revised Appendix.\n\n> The results in Figure 5 suggest that it may be beneficial to continue\n> increasing \\lambda beyond the range of values considered in the figure. In\n> most rows, the power continues to increase as \\lambda/\\lambda_{univ}\n> increases up to the final value displayed; additionally, the FDR remains\n> within estimation error of being well-controlled. That is, with 100\n> simulations and a true FDR of 0.1, one might expect a 95% confidence interval\n> to be estimated_FDR +/- 1.96 * sqrt(0.1 * 0.9 / 100), which is ~0.06 so any\n> FDR below 0.106 should be tolerable. The authors should consider continuing\n> to increase lambda to see if further power can be gained or if there is a\n> point at which the regularization is too strong.\n\nFollowing the reviewer's suggestion, we reran the experiment with wider value\ngrid for the regularization parameter $\\lambda$ (up to\n$log_{10}(\\lambda/\\lambda_{univ}) = 5.0$), but found no significant increase\nin terms of performance on avg. statistical power vs. FDR control (figure 5 updated). 
A slight\ncaveat is that due to the time constraint and the large number of\nhyperparameter grids of this experiment, we only reran this experiment with 50\nsimulations (on fixed p=400; 6 different n_samples; 17 different lambdas).\n\nInterestingly, this did not reveal new behavior, confirming that the main effect of lambda is a transition from a low-power to high-power situation, that saturates yet not exceeds the nominal error rate.\n\n> In appendix G it's stated that the same clustering as 4.3 is used to reduce\n> the dimension. In F.1 its stated that the clustering preserves the spatial\n> structure of the data. Is that also the case for the genes in appendix G?\n> There is some clustering of genes by function in the genome, but overall not\n> really so I'm not sure that it makes sense to include any spatial component\n> here. \n\nIt is true that the way we wrote in the preprocessing step of the genomics data\nmakes it confusing: it is only the same as in brain-imaging case as to reduce\nthe effective dimension. While we still use clustering to reduce the dimension,\nwe use different criterion to merge variables (genes) to clusters of variables,\nwhich is pairwise Linkage Disequilibrium, following [ADNRV19, Section 4] (with\navailable public R library). We have added this elaboration to the revised\nedition of the manuscript.\n\n> Furthermore, if the genes are clustered then why does Table 3 say n=1026 and\n> p=24776, doesn't this contradict that the genes were clustered to 1000\n> clusters (p=1000)?\n\nThis is a typo by our part, we fixed it in the revision of the manuscript.\n\n> Finally, if the genes were clustered, then how are individual genes pulled\n> out in Table 3? Do all genes in a significant cluster get put in the table?\n\nIt is indeed the case here: we make inference on 1000 clusters, then all genes\n\n\n[ADNRV19]: Ambroise, C., Dehman, A., Neuvial, P., Rigaill, G., & Vialaneix,\nN. (2019). Adjacency-constrained hierarchical clustering of a band similarity\nmatrix with application to genomics. Algorithms for Molecular Biology, 14(1),\n1-14.\n", " We thank the reviewer for a very detailed comments and spotting\ntypos/incoherent notations in the manuscript.\n\n> One minor weakness is that the clarity of the manuscript could be improved in\n> some places. It is quite dense (which is understandable due to space\n> constraints). While some parts of the paper have really nice intuitive\n> explanations, other parts do not (e.g., equations 4 and 5). Furthermore, the\n> notation seems to be shifting throughout (see some of my comments in the\n> \"Questions\" section) which can make it somewhat difficult to follow.\n\nWe agree with the comment of the reviewer. We have made changes on the revsions of the paper, and give some more details on the intuition of Eq(4) and (5), in particular the test statistics for dCRT is just a measure of Pearson correlation between regression residuals, scaled by a factor of $\\sqrt{n}$ (elaboration put under Eq.(5)).\n\nWe also fixed the incoherence in notations, especially in the proof in the Appendix following the reviewer's suggestions. \n\n> Figure 2 is interesting, but the asymptotics presented here -- and the really\n> interesting case -- is considering what happens as both n and p get large. A\n> very small simulation study to show that the asymptotic results hold even for\n> p >> n would, in my opinion, strengthen the work. 
Furthermore, in both the\n> brain processing application and the TCGA application the data is\n> preprocessed so that p ~ n. Is this because the p^3 scaling makes it\n> infeasible to run on the full dataset, or are there additional issues with $p \\gg n$?\n\nIndeed, the reviewer makes a good remark on the computational reasoning: the\nruntime with $\\mathcal{O}(p^3)$ makes it infeasible to run on the full dataset,\nwith $p=200,000$ on the brain-imaging and $p=24766$ on genomics dataset.\n\nFollowing this argument, and to further answer the question of the reviewer on\nthe statistical power, we have added in the Appendix (Section I) an extra simulation\nscenario with the same settings as in Section 4.1 and 4.1, but with fixed\nnumber of samples $n=400$ and varying dimension\n$p=(200,400,800,1600,3200,6400)$. The result shows that the when $p \\gg n$, at\nthe level of controlling FDR=0.1, the Average Power (on 100 simulations)\ndecreasing to 0.0 when p reaches 1600. We believe this is due to the fact that\nin these scenarii, the sparsity estimator used to estimated the logistic\nregression, and for the decorrelation step cannot perform well due to very\nlimited number of data points provided.\n\nMore simply, on top of algorithmic considerations, it is obvious that the conditional inference problem only becomes harder when p increases. This is also why we has to resort to dimension reduction procedures in the experiments on real data.\n\n\n> There is no discussion of finite sample guarantees. e.g., Knockoffs control\n> the FDR for finite sample sizes, whereas the present method only has such\n> guarantees in the asymptotic regime. This is not a big deal, but it would be\n> good to mention this somewhere (maybe in the related works section).\n> \n\nIndeed, there is no finite sample guarantees with the proposed approach.\nWe agree with the reviewer on these two remarks, and have added the lack of finite-sample guarantee on Remark 3.1.\n\n> In lines 171-174 the mention of a variable-screening step makes it sound like\n> a somewhat heuristic procedure, when, in fact, it just leverages that it is\n> trivial to compute p-values for j's where is zero.\n> \n\n> In Theorem 3.2, is $p$ overloaded as both the number of features in the\n> design matrix and also the number of tests? If separate they should not both\n> be labeled $p$. If they are the same, then isn't the theorem trivial by the\n> assumption of being fixed (eventually and it is essentially just standard\n> logistic regression)\n> \n\nWe indeed assumed the number of tests is the same as the number of variables,\nwhich means they are both label $p$ purposefully. We will add a remark on this\nin the revised version. As the second part of the remark, we adjust the Theorem\n3.2 to become a Corollary of Theorem 3.1, to reflect that it is a consequence.\n\n> Remark 4.2 feels important and should be emphasized a bit more in my opinion\n> -- the procedure used in the empirical tests is not proven to be valid by the\n> analysis, but it seems to work in practice.\n\nIndeed, this is correctly pointed out by the reviewer. 
We have also added a mentioning of this remark in the Discussion section.\n\nWe remark that we are not able to prove independence/PRDS but BH still seems to\nwork well; intuitively, the conditional test exhausts dependency with other\nvariables $-j$ on each of the $j$, so the procedure may achieve\nquasi-dependence, but we were not able to rigorously\nprove this yet, and this might be a good future direction in terms of\ntheoretical perspective of the procedure.\n", " We thank the reviewer for his/her remarks. Followings are our answers to the\nquestions.\n\n> Limited to only looking at logistic regression - can this extend to other\n> models?\n\n> Can this method be generalized to work on a family class of models in high\n> dimensions, e.g. any generalized linear model (GLM)?\n\nIndeed, the reviewer is correct that extension of this work to some families of\ngeneralized linear model proceeds naturally, as it is natively supported by the analysis in [NL17]. We have added a brief remark in the discussion area for this perspective.\n\n> The authors should emphasize their novelty and contribution as more than just\n> an extension of CRT.\n\nWe agree with the reviewer. In fact, in the contribution paragraph in Section\n1, we noted that we adapt the dCRT for the classification case (under logistic\nrelationship), while improving on the prohibitive computational cost of the\noriginal CRT.\n\n> Why do we need the noise term in Eqn. 12? \n\nThe noise term is used to increase the difficulty of the logistic regression\nproblem -- we basically introduced an SNR parameter that dictates the\nsignal-to-noise ratio of the simulated dataset. We also have pointed out in the\nremark below that the noise term can be understood as measurement error in data\ncollection for realistic scenario. Technically, it results in the model being mis-specified, but the approach followed is robust to this mis-specification (see l.229-231).\n\n> What happens in the other case when n >> p?\n\nAs Theorem 3.1 and 3.2 pointed out, when $n \\gg p$, we will have the test\nstatistics with empirical distribution almost perfectly similar to standard\nnormal distribution; and so we no longer need the correction for the test-statistics. \n\n[NL17]: Ning, Y., & Liu, H. (2017). A general theory of hypothesis tests and confidence regions for sparse high dimensional models. The Annals of Statistics, 45(1), 158-195.", " We thank the reviewer for his/her remarks. Followings are our answers to the\nquestions.\n\n> From a theoretical standpoint, more work can be done in explaining the\n> technical novelty in extending the method of dCRT to the setting of\n> classification.\n\nWe agree with the reviewer: we extend the paragraph below equation (5) to state that this equation (which is\nproposed as a test statistics for dCRT) is basically a calculation of\ncorrelation of the regression residuals, then scaled by a factor of\n$\\sqrt{n}$. Hence, the correct distribution of the statistic defined by the formula depends heavily on\nhow well the residuals are estimated. This becomes non-trivial when the\nrelationship between labels and features become non-linear, e.g. in\nclassification setting with logistic relationship.\n\n> Right now the revised estimator, which used the second derivative of the\n> logistic function instead, has minimal intuition for how it is derived, and\n> what makes this problem a technical challenge. 
\n\nWe believe that the intuition of the novelty of the method have been elaborated\nin the beginning of Section 3: observing that the Fisher score (the gradient of the negative log-likelihood wrt to parameters, i.e. the natural decision statistic for the problem) is biased in the\n$n < p$ setting, we correct that score.\n\n> It is unclear which part of the estimator / results are to do with increasing\n> computational efficiency vs. novel statistical results for the\n> high-dimensional classification setting. This also makes it hard to tease out\n> how to evaluate the estimator and the results.\n\nRegrding the statistical efficiency of the decorrelating statistics, we argued\nin Eq.(7) that the second term on the right-hand side is not negligible in the\n$n < p$ case, and therefore we try to cancel the effect of this term by\nsubtracting an empirical esimation of it. \n\nRegarding computational efficiency, we do not claim that our method is faster than the original dCRT. In fact, the iteration\ncomplexity of the two methods should be the same, as stated in the Appendix.\nThe benchmarked runtime of Table 1 in the main text also supports this\nanalysis. We also make a remark on the computational cost of CRT-logit in\nthe Discussion, and note that it is one of the main limitations of\nCRT-logit. This can be partially addressed by running the computation\n for each variable in parallel.\n\n> What does the notation $y_i (X^T_{I, -j} \\beta)$ mean? Is that an argument to\n> $y_i()$ or multipled by $y_i$? For example see (4)\n\nWe agree that this kind of notation might be confusing: what we meant is the\nlatter case. To avoid this minor confusion we swap the case, and use $(X^T_{I, -j}\n\\beta) y_i $ instead in the revision of the paper.\n\n> For the comparison with dCRT in Figure 1, how does dCRT perform when not\n> using the original features, but a dictionary mapping of the original\n> features (e.g. low-degree polynomial)? \n\nWe agree with the reviewer that performing adequate dimension reduction is essential in problems with $n \\ll p$. \nThis is actually what we do in the brain imaging and geentic experiments, where we consider clustering-based dimension reduction.\nThe only caveat is that one should be aware on what objects precisely the null hypothesis is rejected: not original features, but combinations thereof (dictionary elements, clusters etc.). We make the motivation for such reduction more explicit in the revised version (appendix I), by showing that the approach becomes powerless whenever $n \\ll p$.\n\nCould the reviewer elaborate on the dictionary mapping with low-degree polynomial? As far as we are aware, when we include a fitted model with low-degree polynomial, the number of parameters will actually increase, making the dimension issue actually worse ?\n\n> This might be a fairer comparison as it is quite obvious that just using the\n> residuals from doing linear regression on the original features will lead to\n> a poor for fit for a logistic function.\n\nWe agree that it can be obvious, but we think that this is the whole motivation\nof the manuscript: adapting the obviously biased estimation of the dCRT to the\ncase of non-linear relationship.\n\n> Is the result in Theorem 3.1 assuming that the solve_scaled_lasso_cv step works perfectly? 
\n\nIndeed this is the case, and we reflect it in Assumption 3.1-(A2), which is used for Lemma A.3 (that states the bound for estimation error of the scaled lasso) in the Supplementary Material.\n\n> The result in Theorem 3.2 is for the expected FDR. Can that result be used as\n> is in Theorem 3.1? I don’t see how.\n\nPerhaps there is a typo in the reviewer comment, as you would mean expected FDP\n(since the FDR=E[FDP]) in asymptotic case? In fact, in our proof for Theorem\n3.2, we need to use the asymptotic results in Theorem 3.1. We have added a discussion (as pointed out by reviewer np9G) that we have not found a finite-sample guarantee for the theoretical analysis of the decorrelated test-score, and will leave it for future work.\n", " We thank the reviewer for his/her remarks. Followings are our answers to the\nquestions.\n\n> CRT-logit is proposed for problems in which $n > p$, but the authors give an\n> asymptotic ($n \\to \\infty$) analysis of its performance in Theorem 3.1. At a\n> high level, I am confused by this meta structure in the paper. Do the authors\n> have a sense for how quickly (in $n$) the p-values supplied by the algorithm\n> become ‘good’ in some sense?\n> \n\nIndeed, due to the lack of space, we have not provided a more detailed comment\non the speed of convergence in the main text. We put it in the proof of Theorem\n3.1, which shows that the p-values \"becomes good\" at the rate\n$\\mathcal{O}(1 / \\sqrt{n})$. \n\nWe agree that it is better to put this in the main text, and have updated it\naccordingly under Remark 3.1.\n\n> What is the utility of adding the extra assumption of sub-exponential tail\n> behavior only in the appendix (I am comparing Assumption A.1, (3) to\n> Assumption 3.1, (3))?\n> \n\nIt is true that it might get confusing to compare the assumption A.1 and 3.1 in the main text: the sub-exponential tail behavior is needed for our proof of Theorem 3.1. We noted in Remark A.1 (the beginning of SupMat file) that this is purely for theoretical analysis only, and will not affect experimental results section. This is a quite standard requirement for this type of analysis.\n\nWe have merged these two assumptions in the main text with the revised version, in particular, we added the sub-exponential to Assumption 3.1 in the main text.\n\n> The ‘description paragraph’ in section 4.3 could be made clearer. In\n> particular, I suggest the authors simply state that they are working with\n> fMRI data and that the goal of the analysis is to identify voxels with\n> task-related levels of activity. As written, the type of data and the goal of\n> the analysis are unclear.\n\nWe agree with the reviewer comment, and added this sentence in the Description paragraph of Section\n4.3 following the suggestion.\n", " We thank the reviewers for their efforts to provide very detailed comments and\nsuggestions. We take into account the comments on typos of the manuscript, and\nwe have taken these into account with the revised version. 
A slight caveat is\nthat at the moment, due to the strict 9-pages constraint of the revised-edition, we cannot add all of the suggestions/remarks inside the\nrevised version of the main text yet, but we promise to do that in the future\niteration of the manuscript.\n\nWe have provided dedicated answers to reviewers accordingly, following each of\ntheir reviews.", " The authors study the problem of doing inference on a logistic regression model with an L1 penalty in high dimensions, where the number of features in the dataset is at least as large as the size of the dataset. For this problem, they develop a variant (that they call CRT-logit) of the distilled conditional randomization test (dCRT) with higher power than the latter. Their innovation is in introducing a decorrelation step that brings the null distribution of the test statistic closer its assumed distribution - a standard normal. An asymptotic analysis of the performance of CRT-logit is given. I think this is a very nice paper. It focuses on an important and ubiquitous inference problem, and is generally of a high quality, largely due to its clarity and thoroughness. The central technical innovation, namely, the decorrelation procedure discussed in Eqs. 8-11, is well-motivated both analytically and in Figure 1. The new algorithm that is introduced and tested, CRT-logit, will be of interest to the broader machine learning community. Its relationship to existing algorithms is discussed; a number of experiments comparing it with existing algorithms are also presented. CRT-logit is proposed for problems in which $p>n$, but the authors give an asymptotic ($n\\rightarrow\\infty$) analysis of its performance in Theorem 3.1. At a high level, I am confused by this meta structure in the paper. Do the authors have a sense for how quickly (in $n$) the p-values supplied by the algorithm become ‘good’ in some sense?\n\nWhat is the utility of adding the extra assumption of sub-exponential tail behavior only in the appendix (I am comparing Assumption A.1, (3) to Assumption 3.1, (3))?\n\nThere is a typo in line 93. $x$ should be $x_n$.\n\nThe ‘description paragraph’ in section 4.3 could be made clearer. In particular, I suggest the authors simply state that they are working with fMRI data and that the goal of the analysis is to identify voxels with task-related levels of activity. As written, the type of data and the goal of the analysis are unclear. Yes.", " This paper aims to extend the growing literature on identifying relevant features in a machine learning model. The authors focus on the setting of classification in a high-dimensional setting where the number of relevant features is sparse (i.e., less than n^{1/2}), where n is the number of measurements and p is the number of features. To do so, they extend the algorithm of distilled conditional randomization test (dCRT), which itself is an extension of the conditional randomization test (CRT) to make CRT more computationally feasible. \n\nA key step of the dCRT algorithm is to take residuals from two regressions (one regression to see if feature j lies in the linear span of the other features, and the second regression to see if the response variable lies in the linear space of the other features) and use that to compute the test statistic. In essence the key innovation of this paper is to extend the way the residuals are taken to fit a logistic model rather than a linear model. 
This estimator is called CRT-logic.\n\nGiven this new test statistic and an assumption of sparsity along with other regularity conditions, they prove pointwise gaussian approximation and asymptotic validity of the CRT-logit estimator. They then corroborate their results with simulations and real-world data.\n Strengths \n\n-\tThe paper tackles an important problem of understanding feature relevance in a high-dimensional classification setting with sparse features. \n-\tThe empirical results support their claim\n-\tThe paper is relatively straightforward to parse through.\n\n\nWeaknesses\n\n-\tFrom a theoretical standpoint, more work can be done in explaining the technical novelty in extending the method of dCRT to the setting of classification. Right now the revised estimator, which used the second derivative of the logistic function instead, has minimal intuition for how it is derived, and what makes this problem a technical challenge. \n-\tIt is unclear which part of the estimator / results are to do with increasing computational efficiency vs. novel statistical results for the high-dimensional classification setting. This also makes it hard to tease out how to evaluate the estimator and the results. \n 1.\tWhat does the notation y_i(X^T_{I, -j} \\beta) mean? Is that an argument to y_i() or multipled by y_i? For example see (4)\n2.\tFor the comparison with dCRT in Figure 1, how does dCRT perform when not using the original features, but a dictionary mapping of the original features (e.g. low-degree polynomial)? This might be a fairer comparison as it is quite obvious that just using the residuals from doing linear regression on the original features will lead to a poor for fit for a logistic function.\n3.\tIs the result in Theorem 3.1 assuming that the solve_scaled_lasso_cv step works perfectly? The result in Theorem 3.2 is for the expected FDR. Can that result be used as is in Theorem 3.1? I don’t see how.\n N/A", " The authors study the case of high-dimensional logistic regression when the number of features p is much greater than the number of sample n. They propose the CRT-logit algorithm that combines a variable-distillation step and decorrelation step to keep the sparsity in l1-penalized logistic regression. The authors provide theoretical analysis of their approach and show it is effectiveness on simulations and experiments on real-world brain-imaging and genomics data. The main contribution is the proposal of the CRT-logit method which is shown to be effective and inference cost not prohibitively high. Strengths\n- The authors propose CRTlogit and provide thorough theoretical and experimental results\n- Highly relevant problem since logistic regression is still one of the most commonly used methods as well as l1-penalty in machine and deep learning\n- Valid and thorough theoretical results\n- Show empirical validation of theoretical results\n- Also show experiments on average inference runtime in addition to performance results\n- Tests on real-world brain and genomics data.\n- Very nice qualitative plots e.g. Figure 1 showing the improvement on the theoretical quantiles of the proposed CRT-Logit over dCRT.\n\nWeaknesses\n- Some of the derivations in Eqn, 8-11 can be moved to the appendix.\n- Limited to only looking at logistic regression - can this extend to other models?\n- The authors should emphasize their novelty and contribution as more than just an extension of CRT.\n 1. Why do we need the noise term in Eqn. 12?\n2. 
Can this method be generalized to work on a family class of models in high dimensions, e.g. any generalized linear model (GLM)?\n3. What happens in the other case when n >> p? Yes the authors have addressed the limitations of their work.", " This paper presents a method for performing hypothesis testing for lasso-penalized logistic regression in the high dimensional (but sparse) setting. The main result relies on the asymptotic normality of a test statistic that is essentially how correlated a given feature is The proposed approach is interesting and theoretically well-motivated. That it gives (asymptotically) valid p-values is a compelling advantage over the knockoff framework that only allows for FDR control. The asymptotics are also nice because then one can avoid needing to do any resampling. One aspect that I found missing from the paper is a discussion of (the lack of) finite-sample guarantees. The lack of finite sample guarantees is not a concern to me, but given that that's one of the compelling aspects of some of the related works (e.g., knockoffs) it would be good to at least discuss.\n\nOne minor weakness is that the clarity of the manuscript could be improved in some places. It is quite dense (which is understandable due to space constraints). While some parts of the paper have really nice intuitive explanations, other parts do not (e.g., equations 4 and 5). Furthermore, the notation seems to be shifting throughout (see some of my comments in the \"Questions\" section) which can make it somewhat difficult to follow. Comments / questions:\n\n* Figure 2 is interesting, but the asymptotics presented here -- and the really interesting case -- is considering what happens as both n and p get large. A very small simulation study to show that the asymptotic results hold even for p >> n would, in my opinion, strengthen the work. Furthermore, in both the brain processing application and the TCGA application the data is preprocessed so that p ~ n. Is this because the p^3 scaling makes it infeasible to run on the full dataset, or are there additional issues with p >> n?\n\n* There is no discussion of finite sample guarantees. e.g., Knockoffs control the FDR for finite sample sizes, whereas the present method only has such guarantees in the asymptotic regime. This is not a big deal, but it would be good to mention this somewhere (maybe in the related works section).\n\n\n\nMinor Comments / questions :\n* The switch from $\\hat{w}^j$ to $\\hat{\\beta}^{d_{x_{*,j}}$ from equations (9) to (10) is a bit notationally confusing. Adding a sentence beforehand would help with the transition.\n\n* In lines 171-174 the mention of a variable-screening step makes it sound like a somewhat heuristic procedure, when, in fact, it just leverages that it is trivial to compute p-values for j's where $\\hat{beta}_j$ is zero.\n\n* It would be nice to be notationally consistent between Assumption 3.1 and Assumption A.1. (e.g., $\\kappa^2$ becomes $K$ and $K$ becomes $K'$)\n\t\n* In Theorem 3.2, is p overloaded as both the number of features in the design matrix and also the number of tests? If separate they should not both be labeled $p$. 
If they are the same, then isn't the theorem trivial by the assumption of $p$ being fixed (eventually $n \\gg p$ and it is essentially just standard logistic regression)\n\n* Remark 4.2 feels important and should be emphasized a bit more in my opinion -- the procedure used in the empirical tests is not proven to be valid by the analysis, but it seems to work in practice.\n\n* In Table 1, the boldface should be removed. The standard usage of boldface in these types of tables is to highlight the fastest method. The parenthetical \"(this work)\" is already sufficient to draw the reader's attention to CRT-logit\n\n* The notation in appendix A was a bit confusing to me. In particular, in several places (e.g., the definition of $v^*$) a vector is defined which only makes sense later under the assumption that $j=1$. For example $v^*$ is defined as ($1, -w^{0, j})$, but then ${v^*}^\\top \\nabla \\ell(\\beta)$ only makes sense if the 1 in $v^*$ lines up with $j$. This is obviously not a big problem and doing something more \"correct\" might even be notationally clunky enough to not be worth it, but at the very least a quick explanation of the abuse of notation would be helpful.\n\n* $\\hat{v}$ has not yet been defined by Lemma A.4.\n\n* $v^*$ as defined on line 404 and defined (again) on line 414 have different signs for the $w^{0, j}$ part.\n\n* I may be missing something, but is some term missing from the first line in the display after line 415 (same for the first line in the display after line 417). Both of these rely on lemma A.4 and so I believe they should also pick up a term that involves $s^* \\vee s' \\log p / n$. This obviously does not affect the result (it gets absorbed into the constant hidden by the $\\preceq$ notation at the bottom of each display) but it did make it difficult to follow the argument. Along these lines, is Lemma A.4 missing an absolute value sign on the right hand side of both equations in the displace below line 408?\n\n* The results in Figure 5 suggest that it may be beneficial to continue increasing \\lambda beyond the range of values considered in the figure. In most rows, the power continues to increase as \\lambda/\\lambda_{univ} increases up to the final value displayed; additionally, the FDR remains within estimation error of being well-controlled. That is, with 100 simulations and a true FDR of 0.1, one might expect a 95% confidence interval to be estimated_FDR +/- 1.96 * sqrt(0.1 * 0.9 / 100), which is ~0.06 so any FDR below 0.106 should be tolerable. The authors should consider continuing to increase lambda to see if further power can be gained or if there is a point at which the regularization is too strong.\n\n* In appendix G it's stated that the same clustering as 4.3 is used to reduce the dimension. In F.1 its stated that the clustering preserves the spatial structure of the data. Is that also the case for the genes in appendix G? There is some clustering of genes by function in the genome, but overall not really so I'm not sure that it makes sense to include any spatial component here. Furthermore, if the genes are clustered then why does Table 3 say n=1026 and p=24776, doesn't this contradict that the genes were clustered to 1000 clusters (p=1000)? Finally, if the genes were clustered, then how are individual genes pulled out in Table 3? Do all genes in a significant cluster get put in the table?\n\n\t\n\n\t\n\nTypos:\n* Line 13 \"the geometry of ell_1\" --> \"the geometry of the ell_1\"\n* Line 29 \"conditionally to\" sounds off to me. 
Perhaps \"conditioned on\"?\n* Lines 37-39 are difficult to parse grammatically\n* Lines 54-55 \"convergence to Gaussian\"\n* Lines 64-65: \"standard normal test-statistic\" --> either \"a standard normal test-statistic\" or \"standard normal test-statistics\"\n* Line 65 \"in large-sample regime\" --> \"in the large-sample regime\"\n* Figure 1 caption: \"The empirical distribution of dCRT null-statistic\" --> \"The empirical distribution of the dCRT null-statistic\"\n* Line 71: \"multiple sampling\" --> \"multiple samplings\"\n* Line 74: \"hence inherently\" --> \"and hence inherently\"\n* Line 169: \"where the formula for empirical\" --> \"where the formula for the empirical\"\n* Line 173-174: \"We provide empirical benchmark of runtime\" --> \"We provide an empirical benchmark of the runtime\"\n* Line 176: \"inline\" --> \"in line\"\n* Line 198: \"using Benjamini-Yekutieli\" --> \"using the Benjamini-Yekutieli\"\n* Line 215: \"implementation are\" --> \"implementation is\"\n* Line 253: \"at default value\" --> \"at a default value\"\n* Lines 253-254: \"using Benjamini-Hochberg procedure\" --> \"using the Benhamini-Hochberg procedure\"\n* Line 254: \"Results in Figure 3\" --> \"The results in Figure 3\"\n* Line 310: \"analyze two sample image\" --> \"analyze two sample images\"\n* Line 315: \"is in emotion task\" --> \"is in the emotion task\"\n* Line 315: \"failing to control FDR\" --> \"failing to control the FDR\"\n* Line 316: \"at nominal level\" --> \"at the nominal level\"\n* Line 338: \"We note that there exists\" --> \"We note that there exist\"\n* Line 404: \"under logistic model\" --> \"under the logistic model\"\n* Line 405: \"under logistic model\" --> \"under the logistic model\"\n* Line 413: \"written in more general\" --> \"written in a more general\"\n* Line 415: \"where we use triangle inequality\" --> \"where we use the triangle inequality\"\n* Line 416: \"last inequality us due\" --> \"last inequality is due\"\n* Line 456: \"from inference algorithm\" --> \"from an inference algorithm\"\n* Display below line 458: \\hat{k}_{BY} should be \\hat{k}_{BH} I believe the authors have adequately addressed the limitations of their work, and I am not aware of any potential negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 3 ]
[ "fPyMyHEfXop", "Rj2hJ15u7f", "JV-HKgaETT", "aoGxfP2HC_b", "aoGxfP2HC_b", "TDLn0erASxA", "8O1MMZ4dv7c", "TV-tavsEhpU", "nips_2022_HxZpawUrv9Q", "nips_2022_HxZpawUrv9Q", "nips_2022_HxZpawUrv9Q", "nips_2022_HxZpawUrv9Q", "nips_2022_HxZpawUrv9Q" ]
nips_2022_A6EmxI3_Xc
Inducing Neural Collapse in Imbalanced Learning: Do We Really Need a Learnable Classifier at the End of Deep Neural Network?
Modern deep neural networks for classification usually jointly learn a backbone for representation and a linear classifier to output the logit of each class. A recent study has shown a phenomenon called neural collapse that the within-class means of features and the classifier vectors converge to the vertices of a simplex equiangular tight frame (ETF) at the terminal phase of training on a balanced dataset. Since the ETF geometric structure maximally separates the pair-wise angles of all classes in the classifier, it is natural to raise the question, why do we spend an effort to learn a classifier when we know its optimal geometric structure? In this paper, we study the potential of learning a neural network for classification with the classifier randomly initialized as an ETF and fixed during training. Our analytical work based on the layer-peeled model indicates that the feature learning with a fixed ETF classifier naturally leads to the neural collapse state even when the dataset is imbalanced among classes. We further show that in this case the cross entropy (CE) loss is not necessary and can be replaced by a simple squared loss that shares the same global optimality but enjoys a better convergence property. Our experimental results show that our method is able to bring significant improvements with faster convergence on multiple imbalanced datasets.
Accept
This paper examines the use of a random equiangular tight frame (ETF) as a replacement mechanism for the final classification layer in a deep neural network, and demonstrates experimental advantages in class-imbalanced training scenarios. Reviewers gave drastically different assessments of this paper, with ratings ranging from reject to weak accept. The authors provided extensive responses to all reviewers, and Reviewer amWi participated in an extended discussion with the authors. Author responses directly addressing concerns raised by other reviewers, such as pointing to ImageNet results in response to Reviewer Nj4c asking for such experiments, appear not to have received subsequent engagement from reviewers. The Area Chair has taken a detailed look at the paper and the entirety of the discussion, and agrees with Reviewer amWi's assessment. The work provides an interesting examination of ETF as a novel mechanism to address class-imbalanced training; the contributions meet the bar for acceptance to NeurIPS. Reviewer amWi makes several suggestions regarding presentation of the main contributions as well as additional papers for citation and discussion, which the authors may want to take into consideration when preparing the final version of the paper.
train
[ "2OP1pbwoG5D", "1XxQSIEte5p", "uhix49Pfxw", "pA8z1FvEan", "lR-aSKBmFrB", "HlkpLqCDpcQ", "qaIa9S9UJgP", "1Sg0enDxdOL", "VoAj80bKMX", "1I2TB-00DH", "PKGSNw3fOZ", "ZxW__SMyDm7", "c7dDLS0U5n", "xXU6c8maZNR", "2waE7v9a4Mw", "skwJcr04771", "3f5t9GzlfVG", "gNadXxrwkng", "UU7g_LQ9CN3", "HfMzT3AjeHt", "In0NLdUnayg", "HIzVMfDQhM", "w5cpEtCbfT" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Our method performs much better than the weighted CE baseline (71.9 vs 68.5 in 0.005 imbalance ratio), while ArcFace is apprantly worse than the weighted CE baseline (66.3 vs 68.5). Is this a confilct for ArcFace? \n\nOur Theorem 1 mainly focuses on the imbalanced training setting, compared with previous studies on neural collapse that can only deal with balanced training. Requiring our method to be the best on face recognition is as groundless as requiring ArcFace to work in long-tailed classification. Besides, in the three cases (long-tailed classification, general classification, face recognition), our method is competitive in all three cases, and surpasses the face recognition methods in both long-tail classification and general classification. As a comparison, face recognition methods like ArcFace can only work in face recognition. They perform significantly worse in long-tailed classification or general classification. \n\nSo, a loss cannot be always the best regardless of the application task. Requiring our method also to be the best on face recognition is groundless. A theory can be broken only when its proof is wrong. ", " But the empirical data to back those claims is conflicting. The claims work on one dataset but break on another. Moreover, compared to methods which are the closest in theory (I would say methods like ArcFace are more similar to ETF than CE loss), the method shows no improvement, breaking the theory.", " We have reminded for multiple times of our contributions [C1]-[C4]. We never claim that we want to improve an application. Not all studies must target an application and improve its SOTA performance. If our contributions [C1]-[C4] are properly recognized, you will find that **our work is not an application study at all.** ", " All those loss functions target an application like long tail classification, face recognition and show meaningful improvements on well recognized benchmarks for those applications. If the claim here is that we are improving image classification instead of a loss function which makes generic improvements, I would expect improving state of the art for image classification, otherwise if its a generic loss function which works everywhere, then improvements should be applicable across several applications. But this method is neither generic nor does it improve an application (like image classification).", " As stated in your comment, a better loss function would have an improvement in performance regardless of the application task, no matter if the labels are human or tags. We would like to point out that it is a factual error. As a widely adopted practice, people use Focol loss for long-tailed classification, but will not use it for general classification or face recognition. People use ArcFace for face recognition, but will not use it for long-tailed classification or general classification. If your comment were right, ArcFace would have the best performance no matter on face recognition or long-tailed classification. But actually, as shown in the long-tailed experiments in our previous response, our method performs much better than the weighted CE baseline (71.9 vs 68.5 in 0.005 imbalance ratio), while ArcFace is apprantly worse than the weighted CE baseline (66.3 vs 68.5). 
\n\nSo, your comment that a loss will be better or worse regardless of the application task and your criticism that our method also needs to be better on face recognition are groundless.", " Changing labels to face identities would affect the loss function adversely but somehow it would work better if it is internet tags? Claiming that internet image tags are better suited for this loss function but it somehow does not work for face identities seems like something is not right. If the method really generalizes (for example adding residual connections in CNNs, or using a deeper network vs shallower, less vs more training data), it would not matter if the class labels are human identities or internet tags and we would observe an improvement in performance regardless of the application, especially for something as generic of a module like a loss function! Probably this is just an over tuned method on a couple of datasets (that too on weak baselines even for those datasets) claiming to be a better loss function, when in reality it is not. ", " Results on MegaFace is shown as follows:\n\n| Methods | Id(\\%) | Ver (\\%) |\n|---|---|---| \n|Center Loss |65.49 | 80.14 |\n| Sphere Loss | 72.73 | 85.56 |\n|ArcFace Loss | 81.03 | 96.98 |\n| Our Method | 80.91 | 96.98 |\n\nIt is shown that our method is also applicable to large-scale face recognition. \n\nWe do not think our contributions can be denied just because we do not offer SOTA performance on face recognition. **As stated in our previous responses, our study has no relation to face recognition. We did not claim any contribution related to face recognition. Our method is DIFFERENT from the loss variants in current face recognition studies.** Therefore, whether we surpass the face recognition methods on even challenging large face recognition datasets makes no difference to the contributions of our study. \n\n**We have repeated our contributions [C1]-[C4] for multiple times. But all your comments are only concerned with face recognition. We are obliged to say that our main contributions are totally overlooked.** \n", " LFW is too small to compare as most of these methods would give similar results. A3/A4 are similar on YTF.\n\nMegaface / IJB is where these methods are typically compared where differences are more obvious across methods, Table 6 https://openaccess.thecvf.com/content_CVPR_2019/papers/Deng_ArcFace_Additive_Angular_Margin_Loss_for_Deep_Face_Recognition_CVPR_2019_paper.pdf\n\n", " Sorry for the late results. It takes us some days to re-implement the face recognition methods. We compare our method with these face recognition methods on long-tailed classification, general classification, and face recognition. The results are shown as follows:\n\n(1)\tLong-tailed classification\n\nThe experiments of long-tailed classification are conducted on CIFAR-10 with different imbalance ratios. All models are trained with ResNet-32 backbone under the same setting. 
\n\n| Imbalance ratio | 0.005 | 0.01 | 0.02 |\n| --- | --- | --- | --- |\n|Learnable Classifier + Weighted CE Loss | 68.5$\\\\pm$0.3 | 73.9$\\\\pm$0.3 |79.3$\\\\pm$0.2|\n|Learnable Classifier + Focal Loss [A7] | 69.9$\\\\pm$0.2 | 75.4$\\\\pm$0.3 | 78.9$\\\\pm$0.2 |\n|Learnable Classifier + CB Loss [A6] | 69.3$\\\\pm$0.4 | 76.0$\\\\pm$0.2 | 79.5$\\\\pm$0.4 | \n| Center Loss [A1] | 66.8$\\\\pm$0.5 | 70.6$\\\\pm$0.2 | 77.9$\\\\pm$0.2 |\n| SphereFace Loss [A3] | 67.4$\\\\pm$0.3 | 73.0$\\\\pm$0.3 | 78.9$\\\\pm$0.4|\n| ArcFace Loss [A4] | 66.3$\\\\pm$0.6 | 71.9$\\\\pm$0.4 | 77.4$\\\\pm$0.4|\n| P2SGrad [A5] | 66.8$\\\\pm$0.3 | 69.8$\\\\pm$0.3 | 76.6$\\\\pm$0.3|\n|ETF Classifier + DR Loss (ours) | 71.9$\\\\pm$0.3 | 76.5$\\\\pm$0.3 | 81.0$\\\\pm$0.2 |\n\n(2)\tGeneral classification \n\nWe perform the general classification using ResNet-50 on ImageNet. The results are shown as follows:\n\n|Methods| Top-1 Accuracy |\n|---|---|\n| Learnable Classifier + CE Loss | 76.21 |\n| Center Loss [A1] | 75.46 | \n| SphereFace Loss [A3] | 75.71 |\n| ArcFace Loss [A4] | 75.70 | \n| P2SGrad [A5] | 75.66 |\n| ETF classifier + DR Loss (ours) | 76.05 |\n\n(3)\tFace recognition\n\nWe use the code base released by ArcFace to conduct experiments of face recognition on LFW and YTF. We train on MS1MV2 and adopt the training setting used in ArcFace. The results are shown as follows:\n\n| Methods | LFW | YTF |\n|--- | --- | --- |\n| Center Loss [A1] | 99.28 | 94.9 |\n| SphereFace [A3] | 99.42 | 95.0 |\n| ArcFace [A4] | 99.83 | 98.0 |\n|P2SGrad [A5] | 99.82 | 97.2 |\n|ETF classifier + DR Loss (ours) | 99.82 | 97.7 | \n\nIt is shown that our method achieves the best in long-tailed classification. In general learning on ImageNet, our method is close to the baseline (Learnable classifier + CE loss), while the face recognition methods have degraded performances than the baseline. In face recognition, it is shown that our method is still competitive with the losses proposed in [A1, A3, A4, A5]. We believe that the performance of our method can be further improved if more engineering work, such as carefully tunning the learning rate, is involved. \n\nPlease note that we **never claim** that “learnable classifier is not advantageous in all cases”. Our contributions have been rigorously stated in C1 – C4 and Lines 78-90 in our paper. We prove that fixing an ETF classifier is advantageous because neural collapse can inherently happen even in **imbalanced training.** So, we mainly focused on long-tailed classification experiments and did not test it on face recognition in our paper. \n\nOur method also works well in general learning and face recognition because it better converges to the neural collapse optimality. As described by neural collapse, the ETF structure corresponds to the maximal pair-wise equiangular separation. It is consistent with the goal of face recognition that collapsing features of each class into a center, while maximizing the distances between centers of different classes. Our method of fixing an ETF classifier directly allocates an optimality and then learns features towards it. Besides, our DR loss has an advantage in convergence over the CE loss (Theorem 2). As stated in Sec4.3, DR loss only has \"pull\" gradient, while the CE loss and all the variants in the face recognition studies [A1, A3, A4, A5] rely on the \"push\" gradient whose direction may be inaccurate. \n\nSo, we think applying our method into face recognition is attractive as a future study. 
**We will cite these studies and include the results above in our revised paper.** \n\n---\nReference\n\n[A1] A Discriminative Feature Learning Approach for Deep Face Recognition, Wen et. al, ECCV 2016\n\n[A3] SphereFace: Deep Hypersphere Embedding for Face Recognition, Liu et. al, CVPR 2017\n\n[A4] ArcFace: Additive Angular Margin Loss for Deep Face Recognition, Deng et. al, CVPR 2019\n\n[A5] P2SGrad: Refined Gradients for Optimizing Deep Face Models Xiao Zhang, Rui Zhao, Junjie Yan, Mengya Gao, Yu Qiao, Xiaogang Wang, Hongsheng Li; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019\n\n[A6] Class-Balanced Loss Based on Effective Number of Samples, Cui et al., CVPR 2019.\n\n[A7] Focal loss for dense object detection, Lin et al., ICCV 2017. ", " [A3] and [A4] are studies for face recognition. They normalize all classifier prototypes as 1 into a sphere. However, the classifier is still learnable in [A3] and [A4]. Although the ETF structure has equal norms, which means the prototypes are also located on a sphere, we fix the classifier as an ETF. The classifier in our case is not learnable. We prove that neural collapse can inherently happen even in imbalanced training as long as the classifier is fixed as an ETF (Theorem 1 in our paper). So, our methods are different from [A3] and [A4].\n\n\n\nFrom what I am reading, the conclusion of this paper is that not providing flexibility to the vectors in angular loss functions in A3, A4, A5 by keeping them fixed as an ETF is better than them being learnable. If having them learnable is not advantageous, in my opinion it would be good to compare against these loss functions on some of the benchmarks where these methods were tested instead of comparing with CE loss on Imagenet. \n\nA3, A4, A5 are clearly the closest competitors of this ETF based method instead of the CE loss, but these methods have not even been cited, let alone having an experimental comparison.", " + It is not clear why would features collapse to within-class mean? What is the loss function being used to do so? Are class labels even used during training which leads to this equiangular tight frame? The reader needs to be given a high level introduction to ETF. \n\n\nPlease note that we do give the detailed backgrounds of neural collapse, equiangular tight frame (ETF), and layer-peeled model (LPM) in Section 3 in our paper. It covers nearly one page from Line 116 to Line 158. \n\nNeural collapse as an elegant phenomenon is observed by [28]. But [28] does not give the answer to why neural collapse happen (why would features collapse to within-class mean). In later studies, using the LPM analytical tool, neural collapse is proved to be the global optimality of training on a balanced dataset under the CE loss [5, 8, 22, 15, 47] or MSE loss [24, 37], which theoretically explains why neural collapse happens in balanced training (accordingly also explains why would features collapse to within-class mean). So, both CE and MSE losses can induce neural collapse in balanced training, and labels are necessary. The motivation of our study is to induce neural collapse in imbalanced training. As claimed in [C1], as far as we know, we are the first to show that neural collapse can even happen in imbalanced training as long as the learnable classifier is fixed as an ETF\n\n\n----\n\nReferences\n\n[A1] A Discriminative Feature Learning Approach for Deep Face Recognition, Wen et. al, ECCV 2016\n\n[A2] Prototypical networks for few-shot learning, Snell et. 
al, Neurips 2017\n\n[A3] SphereFace: Deep Hypersphere Embedding for Face Recognition, Liu et. al, CVPR 2017\n\n[A4] ArcFace: Additive Angular Margin Loss for Deep Face Recognition, Deng et. al, CVPR 2019\n\n[A5] P2SGrad: Refined Gradients for Optimizing Deep Face Models, Xiao Zhang et al., CVPR 2019\n\n[5] C. Fang, H. He, Q. Long, and W. J. Su. Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National Academy of Sciences, 118(43), 2021.\n\n[8] F. Graf, C. Hofer, M. Niethammer, and R. Kwitt. Dissecting supervised constrastive learning. In ICML, pages 3821–3830. PMLR, 2021.\n\n[15] W. Ji, Y. Lu, Y. Zhang, Z. Deng, and W. J. Su. An unconstrained layer-peeled perspective on neural collapse. In ICLR, 2022.\n\n[22] J. Lu and S. Steinerberger. Neural collapse with cross-entropy loss. arXiv preprint arXiv:2012.08465, 2020.\n\n[24] D. G. Mixon, H. Parshall, and J. Pi. Neural collapse with unconstrained features. arXiv preprint, arXiv:2011.11619, 2020.\n\n[28] V. Papyan, X. Han, and D. L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020.\n\n[37] T. Tirer and J. Bruna. Extended unconstrained features model for exploring deep neural collapse. ICML, 2022.\n\n[47] Z. Zhu, T. DING, J. Zhou, X. Li, C. You, J. Sulam, and Q. Qu. A geometric analysis of neural collapse with unconstrained features. In NeurIPS, 2021.\n", " + There are several papers that have evaluated similar loss functions not cited and compared with. \n\nThanks for reminding us of these papers. Indeed, many variants of classifier prototype and loss function have been proposed for application tasks such as long-tailed classification, few-shot learning, face recognition, and maybe domain adaptation. We propose the practice of fixing a learnable classifier as an ETF in order to study the neural collapse optimality in imbalanced training. Although these studies are not related to neural collapse, they are also concerned with classifier and loss function. We will cite these papers and discuss the differences in details. \n\nHere we compare with the methods in references [A1,A2,A3,A4] mentioned by you and also [A5] mentioned by Reviewer amWi. \n\n[A1] is a pioneering work for face recognition. In [A1], the whole loss function is composed of a standard CE loss and a center loss. The center loss is a MSE function that regresses each feature into its corresponding prototype (center). However, the centers are learnable, jointly with the training of backbone and classifier. In contrast, we fix the classifier vectors as an ETF structure, so they are not learnable. The ETF structure corresponds to the largest pair-wise equiangular separation of $K$ vectors in $\\\\mathbb{R}^d$. Our dot-regression (DR) loss regresses the inner product of feature and the fixed classifier vector, instead of the feature itself as the center loss in [A1]. Our DR loss only has “pull” gradient term, while all CE-based losses have both “pull” and “push” gradients. The benefit of DR loss over CE loss has been theoretically proved (Theorem 2 in our paper). So, our methods are different from [A1]. Our contributions [C1]-[C4] have no overlap with [A1].\n\n[A2] is a study for few-shot learning. They train a network on train set, and then extract features on support set. The mean of features in each class is used as the prototype for that class. 
Each sample in the query set is classified by measuring the distance between its feature and each prototype. In contrast, our prototypes are fixed as an ETF, instead of the means of features in each class. Our proposed DR loss has no similarity to [A2], which uses the CE loss for training the backbone and Euclidean distance for classifying query samples. So, our methods are different from [A2]. Our contributions [C1]-[C4] have no overlap with [A2].\n\n\n[A3] and [A4] are studies for face recognition. They normalize all classifier prototypes to unit norm so that they lie on a sphere. However, the classifier is still learnable in [A3] and [A4]. Although the ETF structure has equal $\\ell_2$ norms, which means the prototypes are also located on a sphere, we fix the classifier as an ETF. The classifier in our case is not learnable. We prove that neural collapse can inherently happen even in imbalanced training as long as the classifier is fixed as an ETF (Theorem 1 in our paper). So, our methods are different from [A3] and [A4]. Our contributions [C1]-[C4] have no overlap with [A3] and [A4].\n\n\n[A5] is a study for face recognition. They study the cosine softmax loss where both the feature and the classifier prototype are normalized. They propose a loss function whose gradients have coefficients of similarity instead of probability as in the original CE loss. Still, the classifier in [A5] is learnable, while we fix the classifier as an ETF to induce the neural collapse optimality in imbalanced training. The only similarity is that the gradients of our proposed DR loss also have coefficients of cosine similarity (shown in Eq. (15) in our paper). However, the loss proposed in [A5] still relies on the “push” gradient term, while the DR loss only has the “pull” gradient term, which always points toward the optimality. The benefit of the DR loss has been theoretically proved. So, our methods are different from [A5]. Our contributions [C1]-[C4] have no overlap with [A5]. \n\nFinally, we would like to highlight that for any classification problem, the goal is to produce discriminative features with within-variance minimized and between-variance maximized, just as described by the neural collapse phenomenon. So, our method that tries to induce neural collapse in imbalanced training will inevitably share a similar spirit with those studies in application areas such as few-shot learning and face recognition. **However,** as far as we know, our method of fixing a classifier as an ETF has only been adopted in [47] (as mentioned in our Related Work section, Lines 111-115). But [47] only tries this practice in experiments to show that it does no harm to performance, while we show its ability to induce the neural collapse optimality in imbalanced training. Besides, our DR loss has not been proposed in any prior study. **Therefore, our methods (ETF classifier + DR loss) and our contributions ([C1]-[C4]) should be properly recognized.**\n\n-----\nReferences \n\n[A1, A2, A3, A4, A5] are listed in the next part of the response. \n\n", " Dear Reviewer, \n\nThanks for your valuable comments. \n\n\n+ The true contributions of our paper.\n\nWe notice that our paper only scores 1 on “contribution” in your review. We think that you may misunderstand our work and overlook some important contributions. So, we would like to first re-summarize our contributions, and then respond to your questions carefully. 
\n\n**Note that the aim of our study is NOT to propose an algorithm for some application task, nor to study the classifier prototype itself.** The following contributions are consistent with the ones claimed in our paper (lines 78 - 90), but highlight the overlooked points in more details. \n\n**[C1]** Neural collapse as an elegant phenomenon is observed by [28], and is proved (within the LPM) to be the global optimality of training on a balanced dataset under the CE loss [5, 8, 22, 15, 47] or MSE loss [24, 37], which theoretically explains why neural collapse happens in balanced training. As far as we know, we are the first to show that neural collapse can even happen in imbalanced training as long as the learnable classifier is fixed as an ETF (Theorem 1 and Remark 1 in our paper, proved in Appendix A). \n\n**[C2]** Our theoretical analyses on the gradient of CE loss indicate that: (1) neural collapse will be broken in imbalanced training with a learnable classifier, i.e., classier vectors of minor classes would be close or even merged, due to the imbalanced gradients with respect to the learnable classifier; in contrast, our fixed ETF classifier does not suffer from this dilemma (Remark 2 in our paper); (2) the emergence of neural collapse in balanced training is attributed to the “pull-push” mechanism in the CE loss (Remark 3 in our paper); (3) when the classifier is fixed as an ETF optimality, the \"pull\" gradient is always accurate, and the \"push\" gradient is no longer necessary, which inspires us to develop a new loss function with more accurate gradient. \n\n**[C3]** Inspired by (3) in [C2], we further develop a new loss function with only \"pull\" gradient and the same optimality as the CE loss. It has a better convergence property than the CE loss, which is theoretically proved (Theorem 2 in our paper, proved in Appendix B).\n\n**[C4]** Experiments of long-tail classification on CIFAR-10, CIFAR-100, SVHN, STL, and ImageNet are conducted to verify our theories and theory-inspired methods in [C1]-[C3].\n\nWe know that many variants on classifier prototype and loss function have been proposed for application tasks such as long-tailed classification, few-shot learning, face recognition, and maybe domain adaptation. However, we think our work should not be judged only on what method we use and what performance we achieve. **Our theoretical results including Theorem 1, Remark 1, Remark 2, Remark 3, and Theorem 2, are original, and should not be overlooked.** But we do not receive a feedback on any of them from your review. \n\nWe believe that the contributions, in particular [C1]-[C3], are advancements for the neural collapse area because: \n\n(1) Current neural collapse studies only focus on why neural collapse happen in balanced training [5, 8, 22, 15, 47, 24, 37], while we are the first to show that neural collapse can also be a global optimality in imbalanced training; \n\n(2) Compared with current neural collapse studies, we not only show the neural collapse global optimality, but also inspire a new loss function whose benefit is provable; \n\n(3) There are only empirical experiments in these studies showing the convergence to neural collapse, while we additionally show practical applicability of our neural collapse inspired methods by long-tailed experiments on multiple datasets including ImageNet. \n\n**Based on the statement above, we hope that our contributions can be properly recognized.** \n\n----\nReferences\n\n[5] C. Fang, H. He, Q. Long, and W. J. Su. 
Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National Academy of Sciences, 118(43), 2021.\n\n[8] F. Graf, C. Hofer, M. Niethammer, and R. Kwitt. Dissecting supervised constrastive learning. In ICML, pages 3821–3830. PMLR, 2021.\n\n[15] W. Ji, Y. Lu, Y. Zhang, Z. Deng, and W. J. Su. An unconstrained layer-peeled perspective on neural collapse. In ICLR, 2022.\n\n[22] J. Lu and S. Steinerberger. Neural collapse with cross-entropy loss. arXiv preprint arXiv:2012.08465, 2020.\n\n[24] D. G. Mixon, H. Parshall, and J. Pi. Neural collapse with unconstrained features. arXiv preprint, arXiv:2011.11619, 2020.\n\n[28] V. Papyan, X. Han, and D. L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 2020.\n\n[37] T. Tirer and J. Bruna. Extended unconstrained features model for exploring deep neural collapse. ICML, 2022.\n\n[47] Z. Zhu, T. DING, J. Zhou, X. Li, C. You, J. Sulam, and Q. Qu. A geometric analysis of neural collapse with unconstrained features. In NeurIPS, 2021.\n\n", " + Is there any specific motivation for the two cases, $d\\\\ge K$ and $d\\\\ge K-1$\n\n$d\\\\ge K-1$ is a necessary condition for $K$ vectors in $\\\\mathbb{R}^d$ to form an ETF structure that satisfies Eq. (2) in our paper. It is the most widely adopted case in neural collapse studies. In [28], a more strict case $d\\\\ge K$ is used to allow the rotation by the partial orthogonal matrix $\\\\mathbf{U}$ such that $\\\\mathbf{U}^T\\\\mathbf{U}=\\\\mathbf{I}$.\n\n+ Important references are missing [A,B]\n\nThanks for reminding us of these works. We will cite the two references and compare our contributions with theirs in details. \n\nIn the cosine softmax loss, the logit is the cosine between feature and classifier prototypes multiplied by a hyperparameter. In the CE loss, the logit is inner product of feature and classifier prototypes. \n\n[A] analyzes the gradients of the cosine softmax loss and proposes a new loss function whose gradients have coefficients of similarity instead of probability as in the original CE loss. \n\nAs a comparison, we conduct theoretical analyses on the gradient of CE loss, instead of the cosine softmax loss in [A]. The gradients of our proposed loss, DR loss in Eq. (15), also have coefficients of cosine similarity. But compared with [A], DR loss only has “pull” gradient with classifier fixed as an optimality, while the loss proposed in [A] still relies on the “push” gradient term. Besides, our loss is proved to have a better convergence property, while [A] does not give a rigorous support. \n\nAs the earlier work than [29], [B] first shows that fixing a learnable classifier as the vertices of regular polytopes, including d-simplex, d-cube, and d-orthoplex, helps to learn stationary and maximally separated features. The differences have been discussed in the previous response to the comparison with [29]. We cited and compared with their extended work [29], but missed [B]. We will cite [B] and highlight the differences in the revised paper. \n\n\n\n+ Is the random ETF initialization needed to avoid possible biases in its choice? It there any evaluation quantifying this dependency?\n\nThere is no need to worry about the bias of a random ETF. What matters is the separation structure of the classifier prototypes, instead of their specific directions. As shown in Table 1 in our paper, we run the same model for multiple times. 
For each run, a random ETF is produced. The standard deviation is low, which means the performance has a low dependence on the ETF with a specific bias.\n\n\n+ What is the difference between the loss in Eq. (15) and the classic entropy loss in which both features and prototypes are normalized. \n\nWhen feature and prototypes are normalized, the original entropy loss (CE loss) just corresponds to the cosine softmax loss studied in [A]. In this case, the loss still has both “pull” and “push” gradient terms, as shown in the first line of Eq. (3) in paper [A]. \n\nAs a comparison, our dot-regression (DR) loss only has the “pull” gradient term and no longer relies on the “push” gradient. The motivation has been stated in the contribution [C2]. When the classifier is fixed as an ETF optimality, the “pull” gradient is always accurate towards the optimality, while “push” gradient does not necessarily direct to the optimality. The advantage of DR loss over the CE loss has been theoretically proved in Theorem 2 in our paper. \n\n----\nReferences \n\n[28] V. Papyan, X. Han, and D. L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020.\n\n[29] F. Pernici, M. Bruni, C. Baecchi, and A. Del Bimbo. Regular polytope networks. IEEE Transactions on Neural Networks and Learning Systems, 2021\n\n[A] P2SGrad: Refined Gradients for Optimizing Deep Face Models Xiao Zhang, Rui Zhao, Junjie Yan, Mengya Gao, Yu Qiao, Xiaogang Wang, Hongsheng Li; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019\n\n[B] Maximally Compact and Separated Features with Regular Polytope Networks Federico Pernici, Matteo Bruni, Claudio Baecchi, Alberto Del Bimbo; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 46-53\n\n\n\n\n", " + The difference with [47] and [29] should be better highlighted. What is the difference between an ETF simplex and the classic d-simplex regular polytope?\n\nThe objective of LPM studied in [47] is the CE loss with the regularizations of features and classifier. They prove that: (1) neural collapse is the global optimality of the objective in the LPM when training on a balanced dataset; (2) despite being nonconvex, the landscape of the objective in this case is benign, which means that there is no spurious local minimum, so gradient-based optimization method can easily escape from the strict saddle points to look for the global minimizer. \n\nIn contrast, the objective of LPM we consider is the CE loss with the constraints of features and classifier. We mainly consider neural collapse in the imbalanced training. Our contributions are summarized in [C1]-[C4], which are different from the two results in [47]. \n\n[29] shows that fixing a learnable classifier as the vertices of regular polytopes, including d-simplex, d-cube, and d-orthoplex, helps to learn stationary and maximally separated features. It does not harm the performance, and in many cases improves the performance. It also brings faster speed of convergence and reduces the model parameters. Their spirit of learning maximally separated features is very similar to neural collapse. Compared with [29], we prove that neural collapse can even be the global optimality of the CE loss in the imbalanced training using the LPM analytical tool. We also propose a new loss function with a provable advantage over the CE loss. 
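To make the geometry discussed in these responses concrete, here is a minimal NumPy sketch (not the authors' released code) of a randomly rotated simplex ETF classifier together with a dot-regression-style loss. It assumes $d\\ge K$ and uses the construction $\\mathbf{W}=\\sqrt{K/(K-1)}\\,\\mathbf{U}(\\mathbf{I}_K-\\frac{1}{K}\\mathbf{1}\\mathbf{1}^T)$ with $\\mathbf{U}^T\\mathbf{U}=\\mathbf{I}_K$; the function names are illustrative, and the exact scaling constants of Eq. (1), (14), and (15) in the paper may differ.

```python
# Illustrative sketch only: a random simplex ETF classifier and a pull-only,
# dot-regression-style loss, following the description in the responses above.
import numpy as np

def random_simplex_etf(d, num_classes, seed=0):
    """Columns are K fixed prototypes with equal norm and pairwise inner product -1/(K-1)."""
    rng = np.random.default_rng(seed)
    u, _ = np.linalg.qr(rng.standard_normal((d, num_classes)))  # U^T U = I_K (needs d >= K)
    center = np.eye(num_classes) - np.ones((num_classes, num_classes)) / num_classes
    return np.sqrt(num_classes / (num_classes - 1)) * u @ center

def dot_regression_loss(h, W, label):
    """Squared gap between cos(h, w_label) and 1; its gradient w.r.t. the cosine
    is -(1 - cos), i.e. a pure "pull" toward the fixed class prototype."""
    h_hat = h / np.linalg.norm(h)        # normalized feature
    cos = float(W[:, label] @ h_hat)     # prototype columns have unit norm
    return 0.5 * (1.0 - cos) ** 2

K, d = 10, 512
W = random_simplex_etf(d, K)
G = W.T @ W
print(np.allclose(np.diag(G), 1.0))                            # equal norms
print(np.allclose(G[~np.eye(K, dtype=bool)], -1.0 / (K - 1)))  # maximal equiangular separation
print(dot_regression_loss(np.ones(d), W, label=3))             # zero only when h aligns with w_3
```

In a real training loop, as described in the responses, the ETF matrix would stay fixed (no gradient is taken with respect to it) and only the backbone that produces the feature h would be updated.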
\n\nIn neural collapse studies, the simplex ETF is a structure where $K$ vertices in $\\\\mathbb{R}^d$ have the same length and have the largest pair-wise equiangular separation. In [29], d-simplex is a generalization of the triangle or tetrahedron definition to the dimension in $\\\\mathbb{R}^d$, such as a triangle in $\\\\mathbb{R}^2$ and a tetrahedron in $\\\\mathbb{R}^3$. For ETF, when $d=K-1$, the ETF reduces to a regular simplex. For example, when $d=3$ and $K=4$, the ETF is a just a tetrahedron, which is the same as 3-simplex. But ETF allows the condition that there are a less number of vertices than a $d$-simplex, e.g., $d=3$ and $K=3$. As long as $d\\\\ge K-1$, we can have an ETF according to Eq. (1) in our paper. But a $d$-simplex only holds when $d=K-1$, which may limit the choice of dimension for a classification problem with a given number of classes. \n\n+ Highlight the difference with [5].\n\nThe objective of LPM studied in [5] is also the CE loss with feature and classifier constraints. They prove that (1) neural collapse is the global optimality of the objective when training on a balanced dataset; (2) in imbalanced training, neural collapse will be broken, and the prototypes of minor classes will be merged, which explains the difficulty of imbalanced training. \n\nWe also study the objective of CE loss with feature and classifier constraints. As a comparison, (1) We prove that neural collapse can also be the global optimality for imbalanced training as long as the classifier is fixed as an ETF (Theorem 1); (2) We analyze from the gradient perspective and show that the broken neural collapse in imbalanced training is caused by the imbalanced magnitude of gradients of the CE loss (Remark2). We also show that the “pull-push” mechanism is crucial for the emergence of neural collapse in the CE loss in balanced training (Remark 3); (3) Inspired by the analyses, we propose a new loss function with a provable advantage over the CE loss (Theorem 2). \n\nWe will make the comparison with [47, 29, 5] more detailed and highlighted in our revised version. \n\n----\nReferences \n\n[5] C. Fang, H. He, Q. Long, and W. J. Su. Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National Academy of Sciences, 118(43), 2021.\n\n[29] F. Pernici, M. Bruni, C. Baecchi, and A. Del Bimbo. Regular polytope networks. IEEE Transactions on Neural Networks and Learning Systems, 2021\n\n[47] Z. Zhu, T. DING, J. Zhou, X. Li, C. You, J. Sulam, and Q. Qu. A geometric analysis of neural collapse with unconstrained features. In NeurIPS, 2021.\n\n\n\n\n\n\n\n", " Dear reviewer, \n\nThanks for your valuable comments. \n\n+ The contributions of this paper. \n\nWe deeply thank you for appreciating our contribution of the gradient analysis on the \"pull-push\" mechanism. For your concern about the contributions, we would like to re-summarize our contributions. The following contributions are consistent with the ones claimed in our paper (lines 78 - 90), but highlight the overlooked points in more details. \n\n**[C1]** Neural collapse as an elegant phenomenon is observed by [28], and is proved (within the LPM) to be the global optimality of training on a balanced dataset under the CE loss [5, 8, 22, 15, 47] or MSE loss [24, 37], which theoretically explains why neural collapse happens in balanced training. 
As far as we know, we are the first to show that neural collapse can even happen in imbalanced training as long as the learnable classifier is fixed as an ETF (Theorem 1 and Remark 1 in our paper, proved in Appendix A). \n\n**[C2]** Our theoretical analyses on the gradient of CE loss indicate that: (1) neural collapse will be broken in imbalanced training with a learnable classifier, i.e., classier vectors of minor classes would be close or even merged, due to the imbalanced gradients with respect to the learnable classifier; in contrast, our fixed ETF classifier does not suffer from this dilemma (Remark 2 in our paper); (2) the emergence of neural collapse in balanced training is attributed to the “pull-push” mechanism in the CE loss (Remark 3 in our paper); (3) when the classifier is fixed as an ETF optimality, the \"pull\" gradient is always accurate, and the ``push’’ gradient is no longer necessary, which inspires us to develop a new loss function with more accurate gradient. \n\n**[C3]** Inspired by (3) in [C2], we further develop a new loss function with only \"pull\" gradient and the same optimality as the CE loss. It has a better convergence property than the CE loss, which is theoretically proved (Theorem 2 in our paper, proved in Appendix B).\n\n**[C4]** Experiments of long-tail classification on CIFAR-10, CIFAR-100, SVHN, STL, and ImageNet are conducted to verify our theories and theory-inspired methods in [C1]-[C3].\n\nWe believe that the contributions [C1]-[C4] are advancements for the neural collapse area. \n\n+ The current title more refers to the fixed classifier aspect, rather than to the imbalanced neural collapse training regime. \n\nYes, we really thank you for the suggestion on our title. As responded in the previous question, our contributions are mainly targeted at the neural collapse area. **The aim of our study is NOT to propose an algorithm for some application task, nor to study the fixed classifier itself.** We think that the contributions should be conveyed in the Introduction section (lines 78 - 90), instead of the title, so we did not make a title carefully. We will follow your suggestion and revise it accordingly. A title such as “neural collapse in imbalanced training” is more preferable. \n\n----\nReferences \n\n[5] C. Fang, H. He, Q. Long, and W. J. Su. Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National Academy of Sciences, 118(43), 2021.\n\n[8] F. Graf, C. Hofer, M. Niethammer, and R. Kwitt. Dissecting supervised constrastive learning. In ICML, pages 3821–3830. PMLR, 2021.\n\n[15] W. Ji, Y. Lu, Y. Zhang, Z. Deng, and W. J. Su. An unconstrained layer-peeled perspective on neural collapse. In ICLR, 2022.\n\n[22] J. Lu and S. Steinerberger. Neural collapse with cross-entropy loss. arXiv preprint arXiv:2012.08465, 2020.\n\n[24] D. G. Mixon, H. Parshall, and J. Pi. Neural collapse with unconstrained features. arXiv preprint, arXiv:2011.11619, 2020.\n\n[28] V. Papyan, X. Han, and D. L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020.\n\n[29] F. Pernici, M. Bruni, C. Baecchi, and A. Del Bimbo. Regular polytope networks. IEEE Transactions on Neural Networks and Learning Systems, 2021\n\n[37] T. Tirer and J. Bruna. Extended unconstrained features model for exploring deep neural collapse. ICML, 2022.\n\n[47] Z. Zhu, T. DING, J. Zhou, X. Li, C. You, J. Sulam, and Q. Qu. 
A geometric analysis of neural collapse with unconstrained features. In NeurIPS, 2021.\n\n\n\n\n", " + In LPM, how are the features produced? How do you associate\nthe features to be from the data x? \n\nNeural networks are highly non-convex, and approximations are always necessary to perform theoretical analysis. LPM just serves as such an analytical tool. It drops the backbone network, and the features are independent learnable variables. We did not write $\\mathbf{h}_{\\theta}(x)$ because $\\mathbf{h}$ is not conditioned on $x$ in LPM. Although it cannot be used to extract features in applications, it has very similar learning behaviors to the general learning in a real neural network. It is widely adopted in neural collapse studies [5, 8, 22, 15, 47, 24, 37] to facilitate theoretical analysis. We compare LPM with the general learning in a real network as follows:\n\n|| Layer-peeled Model | Real Neural Network|\n|---|---|---|\n| Variables | $\\mathbf{H}$, $\\mathbf{W}$ | $\\mathbf{W}_{1:L-1}$, $\\mathbf{W}$ |\n| Gradient | $\\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{H}}$, $\\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{W}}$ | $\\frac{\\partial \\mathbf{H}}{\\partial \\mathbf{W}_{1:L-1}}\\cdot\\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{H}}$, $\\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{W}}$|\n\nwhere $\\mathbf{H}$ denotes the features, $\\mathbf{W}$ denotes the classifier, and $\\mathbf{W}_{1:L-1}$ denotes the parameters in the backbone network. \n\nSo, we only use LPM for theoretical analysis, but still need to train a real neural network with a backbone in experiments. \n\n\n+ The paper does not describe any limitations. \n\nWe did discuss the limitations of our work in Appendix F. \n\n----\nReferences \n\n[5] C. Fang, H. He, Q. Long, and W. J. Su. Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National Academy of Sciences, 118(43), 2021.\n\n[8] F. Graf, C. Hofer, M. Niethammer, and R. Kwitt. Dissecting supervised contrastive learning. In ICML, pages 3821–3830. PMLR, 2021.\n\n[15] W. Ji, Y. Lu, Y. Zhang, Z. Deng, and W. J. Su. An unconstrained layer-peeled perspective on neural collapse. In ICLR, 2022.\n\n[22] J. Lu and S. Steinerberger. Neural collapse with cross-entropy loss. arXiv preprint arXiv:2012.08465, 2020.\n\n[24] D. G. Mixon, H. Parshall, and J. Pi. Neural collapse with unconstrained features. arXiv preprint, arXiv:2011.11619, 2020.\n\n[37] T. Tirer and J. Bruna. Extended unconstrained features model for exploring deep neural collapse. ICML, 2022.\n\n[47] Z. Zhu, T. DING, J. Zhou, X. Li, C. You, J. Sulam, and Q. Qu. A geometric analysis of neural collapse with unconstrained features. In NeurIPS, 2021.\n", " + There are alternative methods for dealing with the problem of imbalanced datasets. The paper appears to have not considered or compared to such alternatives. \n\nIndeed, there have been a lot of methods for long-tailed classification, including re-sampling methods, re-weighting methods, and different kinds of loss function variants, such as focal loss. However, our work mainly focuses on neural collapse in imbalanced training. It is a neural collapse study whose contributions are summarized in [C1]-[C4]. We do not aim to achieve or push the state-of-the-art performance, so we did not consider the advanced methods that have strong performance in long-tailed classification. 
Some of these advanced methods are designed empirically and lack rigorous support. In contrast, the theoretical results of our work, including the neural collapse optimality in imbalanced training and the advantage of our dot-regression loss, are provable (Appendices A and B). We believe that interpretability is also an important factor that should be recognized. \n\nIn our experiments, we have compared our method with the CE loss weighted by class distribution, denoted as “Learnable Classifier + CE$^*$” in Table 1. We additionally compare our method with focal loss ($\\\\alpha=0.25, \\\\gamma=2$) and class-balanced (CB) loss [48] here. The results of long-tailed classification on CIFAR-10 with Mixup and different imbalance ratios are shown as follow:\n\n| Imbalance ratio | 0.005 | 0.01 | 0.02 |\n| --- | --- | --- | --- |\n|Learnable Classifier + Weighted CE Loss | 68.5$\\\\pm$0.3 | 73.9$\\\\pm$0.3 |79.3$\\\\pm$0.2|\n|Learnable Classifier + Focal Loss | 69.9$\\\\pm$0.2 | 75.4$\\\\pm$0.3 | 78.9$\\\\pm$0.2 |\n|Learnable Classifier + CB Loss [48] | 69.3$\\\\pm$0.4 | 76.0$\\\\pm$0.2 | 79.5$\\\\pm$0.4 | \n|ETF Classifier + DR Loss | 71.9$\\\\pm$0.3 | 76.5$\\\\pm$0.3 | 81.0$\\\\pm$0.2 |\n\nWe observe that our method still performs better than the methods that use re-weighting strategies. \n\n\n\n+ Experiments in settings when there are large number of classes as in ImageNet can make the analysis significantly better. If in such settings a cross entropy loss provides better push and pull gradients than the proposed linear alternative Eq. (14). \n\nActually, we did conduct experiments on ImageNet and showed the results in Table 3. ImageNet-LT dataset is the standard long-tailed version of ImageNet and is widely used in long-tailed classification studies. It has the same number of classes as the original ImageNet. \n\nAs shown in Table 3, the superiority of our method is more remarkable when training for less epochs. It can be explained by the fact that our method directly has the classifier in its optimality and optimizes the features towards the neural collapse solution, while the learnable classifier with the CE loss needs a sufficient training process to separate classifier vectors of different classes. So, our method can be preferred when fast convergence or limited training time is required. The accuracy curves in training for the results in Table 3 are shown in Figure 6 in Appendix E. \n\nNote that our dot-regression loss in Eq. (14) is quadratic but not linear. \nAs shown in Remark 3 in our paper, for a learnable classifier, the \"pull-push\" mechanism of the CE loss makes features of the same class contracted and features of different classes separated. But when the classifier has been fixed as an ETF, we only need the \"pull\" gradient because it is always accurate towards the optimality. So, we develop our dot-regression (DR) loss. As shown in Theorem 2, DR loss theoretically has a better convergence. Note that the proof of Theorem 2 is agnostic of the number of classes. So, no matter how many classes, our method enjoys the benefit. It also gets verified in Table 3 by experiments on ImageNet-LT that has a large number of classes.\n\n----\nReferences \n\n[48] Cui et al., Class-Balanced Loss Based on Effective Number of Samples, CVPR 2019.\n\n\n\n\n", " + In the context of learning in general, it is obviously not needed to learn a linear classifier at the end of DNN. \n\nNote that the \"linear classifier\" is not the encoding of labels. 
It refers to the linear mapping from the high-dimension feature (output from a backbone network) in $\\\\mathbb{R}^d$ to the label encoding $\\\\mathbb{R}^C$, where $C$ is the number of classes. So, it is a matrix in $\\\\mathbb{R}^{d\\\\times C}$. \n\n\nIn the general learning, such as training with the on-hot encoding and cross entropy (CE) loss, why do you claim a linear classifier is obviously not needed to learn? A learnable classifier is necessary to induce enough margin among prototypes (classifier vectors) of different classes. If a classifier is randomly fixed without learning, its prototypes may be close, so the classifier will output similar logits and be unable to classify a feature. We claim that a learnable classifier is not needed only when we fix it as an ETF, which corresponds to the largest pair-wise separation among prototypes. As described by neural collapse, it is the optimality of a learnable classifier in balanced training. Our paper shows that fixing the classifier as an ETF actually brings benefit, i.e., neural collapse can inherently happen even in imbalanced training. \n\n+ When using an LPM model, we may get rid of learning the classifier weights. In an end-to-end learning, does not one still need to learn a method that maps features into the vertices of the ETF?\n\nLPM is an analytical framework where the backbone network is dropped and features are independent learnable variables. It shares similar learning behaviors to the general learning with a backbone, but facilitates theoretical analysis. A detailed comparison between LPM and general learning will be offered in our response to your later question (other comment). Prior studies on neural collapse using LPM also have learnable classifiers. We prove the benefit of fixing the classifier as an ETF, i.e., neural collapse can inherently happen even in imbalanced training (Theorem 1). \n\nIn the general end-to-end learning with a backbone, we can still fix the classifier as an ETF and do not learn it. LPM is used just for theoretical analysis, as adopted in most neural collapse studies. But in practical implementation, a backbone is still needed to have features dependent on the input samples. As described in Lines 151 – 155 in our paper, the difference is that LPM uses the gradient $\\\\frac{\\\\partial\\\\mathcal{L}}{\\\\partial\\\\mathbf{h}}$ to update the variables $\\mathbf{h}$, while practical general learning uses the gradient $\\\\frac{\\\\partial\\\\mathcal{L}}{\\\\partial\\\\mathbf{H}}$ to update the backbone parameters by multiplying with the Jacobian matrix, $\\\\frac{\\\\partial \\\\mathbf{H}}{\\\\partial \\\\mathbf{W}_{1:L-1}}\\\\frac{\\\\partial \\\\mathcal{L}}{\\\\partial \\\\mathbf{H}}$, where $\\mathbf{H}$ is the collection of features and $\\\\mathbf{W}_\\{1:L-1\\}$ denotes the backbone parameters. \n\nOur experimental results with “ETF classifier” all have a learnable backbone and a fixed classifier as ETF in an end-to-end training manner. \n\n\n+ Why not consider other structures if we do not assume any CE loss? Why not a regression into the columns of an arbitrary orthogonal matrix, and there is no need to use a softmax? \n\nYes, they are also feasible. Note that ETF structure has the largest pair-wise separation of $K$ prototypes in $\\\\mathbb{R}^d$, i.e., $\\forall i\\ne j, w_i^Tw_j=-\\frac{1}{K-1}$ (an obtuse angle), while in an orthogonal matrix, the inner product $w_i^Tw_j=0$. 
Besides, the ETF structure also corresponds to the optimal Fisher discriminant ratio, with the within-class variance minimized and the between-class variance maximized. Although using an orthogonal matrix as the classifier is also feasible and has been adopted in some applications, our theoretical work is conducted in the context of neural collapse, which is mainly concerned with the ETF structure as the optimal feature-classifier alignment. So, we did not consider other structures such as orthogonal matrix. \n\n\nIf we use an ETF or an orthogonal matrix as the fixed classifier, we can indeed regress the feature into its corresponding prototype. It is similar to what we do in Section 4.3 (dot-regression loss) in our paper. We regress the product of feature and the prototype, instead of the feature itself. That is because if we directly regress the feature as the loss function, its gradient term of prototype will not attenuate as the optimization approaches to the optimality, like the coefficient of $(1-\\\\cos\\\\angle(\\\\mathbf{h},\\\\mathbf{w}_c^*))$ in Eq. (15) and $(1-p_c(\\\\mathbf{h}))$ in Eq. (12). But they have the same optimality, i.e. all features are pulled into their corresponding fixed prototypes. In our dot-regression loss proposed in Section 4,3, there is indeed no use of a softmax function. \n\n\n\n\n\n", " Dear Reviewer, \n\nThanks for your valuable comments. \n\nWe think that you may misunderstand our work and overlook some important contributions. So, we would like to first re-summarize our contributions, and then respond to your questions carefully. The following contributions are consistent with the ones claimed in our paper (lines 78 - 90), but highlight the overlooked points in more details. \n\n**[C1]** Neural collapse as an elegant phenomenon is observed by [28], and is proved (within the LPM) to be the global optimality of training on a balanced dataset under the CE loss [5, 8, 22, 15, 47] or MSE loss [24, 37], which theoretically explains why neural collapse happens in balanced training. As far as we know, we are the first to show that neural collapse can even happen in imbalanced training as long as the learnable classifier is fixed as an ETF (Theorem 1 and Remark 1 in our paper, proved in Appendix A). \n\n**[C2]** Our theoretical analyses on the gradient of CE loss indicate that: (1) neural collapse will be broken in imbalanced training with a learnable classifier, i.e., classier vectors of minor classes would be close or even merged, due to the imbalanced gradients with respect to the learnable classifier; in contrast, our fixed ETF classifier does not suffer from this dilemma (Remark 2 in our paper); (2) the emergence of neural collapse in balanced training is attributed to the “pull-push” mechanism in the CE loss (Remark 3 in our paper); (3) when the classifier is fixed as an ETF optimality, the \"pull\" gradient is always accurate, and the \"push\" gradient is no longer necessary, which inspires us to develop a new loss function with more accurate gradient. \n\n**[C3]** Inspired by (3) in [C2], we further develop a new loss function with only ``pull’’ gradient and the same optimality as the CE loss. 
It has a better convergence property than the CE loss, which is theoretically proved (Theorem 2 in our paper, proved in Appendix B).\n\n**[C4]** Experiments of long-tail classification on CIFAR-10, CIFAR-100, SVHN, STL, and ImageNet are conducted to verify our theories and theory-inspired methods in [C1]-[C3].\n\nWe think our work should not be judged only on what method we use and what performance we achieve. **Our theoretical results including Theorem 1, Remark 1, Remark 2, Remark 3, and Theorem 2, are original, and should not be overlooked.** We believe that the contributions, in particular [C1]-[C3], are advancements for the neural collapse area because: \n\n(1) Current neural collapse studies only focus on why neural collapse happen in balanced training [5, 8, 22, 15, 47, 24, 37], while we are the first to show that neural collapse can also be a global optimality in imbalanced training; \n\n(2) Compared with current neural collapse studies, we not only show the neural collapse global optimality, but also inspire a new loss function whose benefit is provable; \n\n(3) There are only empirical experiments in these studies showing the convergence to neural collapse, while we additionally show practical applicability of our neural collapse inspired methods by long-tailed experiments on multiple datasets including ImageNet. \n\n**Based on the statement above, we hope that our contributions can be properly recognized.** \n\n----\nReferences \n\n[5] C. Fang, H. He, Q. Long, and W. J. Su. Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National Academy of Sciences, 118(43), 2021.\n\n[8] F. Graf, C. Hofer, M. Niethammer, and R. Kwitt. Dissecting supervised constrastive learning. In ICML, pages 3821–3830. PMLR, 2021.\n\n[15] W. Ji, Y. Lu, Y. Zhang, Z. Deng, and W. J. Su. An unconstrained layer-peeled perspective on neural collapse. In ICLR, 2022.\n\n[22] J. Lu and S. Steinerberger. Neural collapse with cross-entropy loss. arXiv preprint arXiv:2012.08465, 2020.\n\n[24] D. G. Mixon, H. Parshall, and J. Pi. Neural collapse with unconstrained features. arXiv preprint, arXiv:2011.11619, 2020.\n\n[28] V. Papyan, X. Han, and D. L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020.\n\n[37] T. Tirer and J. Bruna. Extended unconstrained features model for exploring deep neural collapse. ICML, 2022.\n\n[47] Z. Zhu, T. DING, J. Zhou, X. Li, C. You, J. Sulam, and Q. Qu. A geometric analysis of neural collapse with unconstrained features. In NeurIPS, 2021.\n\n", " On balanced datasets and when using cross-entropy loss for learning, it is observed that [28] the last layer features from the same class converge to their within-class means and the weights the network in the last layer collapses to the vertices of an equiangular tight frame (ETF). Thus, a natural question to ask is: if this empirical observation can be used to train better deep learning models? The current paper uses this insight to design a network where instead of training for a cross-entropy loss, a loss that regresses the deep features against a randomly sampled set of vectors (corresponding to the vertices of an ETF) is used for classification. In situations when the data is imbalanced, the paper claims that this prior knowledge of ETF allows better learning of the model. Experiments are provided on small scale datasets (such as CIFAR, STL, etc.) 
and demonstrate promise of the method in imbalanced settings. I think the insights underlying the paper are encouraging and the application of the ETF model for imbalanced datasets is reasonable. The experiments show improvements on a variety of datasets under imbalanced settings. \n\nWhile, the insights are interesting, I do not think the contributions are compelling for the following reasons.\n\n1. The key question the paper asks is L58: \"Do we really need to learn a linear classifier at the end of a deep neural network for classification?\". While, in the context of ETF, it is an interesting question, however when thinking of this question in the context of learning in general, it is obviously not needed to learn a linear classifier at the end of a DNN. The one-hot encoding typically used on labels in cross-entropy loss based training is one way of encoding with its own geometric meaning, while the one proposed using ETF in the paper is yet another encoding. In that sense, I do not really see a significant contribution the paper brings in. Of course, when using an LPM model, we may get rid of learning of the classifier weights, but when using the method in an end-to-end learning framework, doesn't one still need to learn a method that maps the features into one of the vertices of the ETF? \n\n2. While ETF is a structure that evolves from the use of CE loss, why not consider potentially other structures if we do not assume any CE loss? Why not a regression into the columns of an arbitrary orthogonal matrix (assuming there is no need to interpret the classification as probabilities any more, and thus there is no need to use a softmax)? \n\n3. There are standard approaches for dealing with the problem of imbalanced datasets, such as choosing weights during training, focal loss, etc. The paper appears to have not considered or compared the proposed approach to any of such alternatives and thus limits the understanding of the benefits to a narrow scope (e.g., Figure 3, Tables 2,3,4, etc.).\n\n4. More experiments, for example in settings when there are very large number of classes as in ImageNet, can make the analysis significantly better. I wonder if in such settings a cross-entropy loss provides better \"push\" and \"pull\" gradients (due to the log term) than the proposed linear alternative (14).\n\nOther comments:\n1. For the layer-peeled model in (4), it is not clear how precisely are the features produced? If you optimize over W and H, then how do you associate the features to be from the data x? When optimizing for h, are you optimizing the parameters of the underlying network? If so, rewriting h to h_\\theta(x) and optimizing over \\theta would make the math better. If it is being optimized only on W and H, then how is H conditioned on the data X, and how do you prevent H from being arbitrary? Please see above. The paper does not describe any limitations as such. However, I believe experiments on larger datasets such as ImageNet could bring out the limitations of the approach better. ", " The paper studies neural collapse in the case in which the final classifier is not subject to back-propagation and is initialized with the vertices of a simplex equiangular tight frame (ETF). In particular the paper studies the unbalanced learning regime and shows that the neural collapse properties hold also in this case. 
The proof is based on the layered-peeled model assumption.\n STRENGTHS:\n\n1) The paper is very interesting as it provides a theoretical analysis of the class imbalance problem with a fixed classifier inspired by the recent neural collapse phenomenon.\n\n2) The push-pull mechanism from the gradient analysis is novel and it is a clear contribution of the paper.\n\n3) The idea may also have practical implications related to some recent works about learning compatible features representations [C, D]. Compatible features are learned so that they can be directly compared even if they are learned from different time instants or from different network architectures. The method [D] specifically takes advantage of a fixed classifier exactly defined as the one proposed in [47] and in this submission.\n\nWEAKNESSES:\n\n1) The paper contributions should be improved. It seems that the main contribution of the paper is about the combination of the following three aspects: neural collapse, unbalanced data and fixed classifiers. The title does not convey a direct relationship with these three aspects. In its current shape it seems more referring to the fixed classifier aspect, like in [29] and [12], rather than to the imbalanced Neural Collapse training regime. This is mostly due to the fact that if a learnable classifier is not “really needed” it is supposed that a fixed one would be preferred. In the reviewer's opinion the title should be improved reflecting more the main content of the paper.\nIn this respect, the differences with [47] and [29] should be better highlighted. In particular a question that should be answered is: what is the difference between an ETF simplex and the classic d-simplex regular polytope? The question is valid either in the fixed case and in the learnable one. Finally, as the paper [5] is very related with the imbalance aspect of the paper, it should be discussed in the related work section and possibly differences highlighted.\n\n2) It seems that in many papers (including the seminal one [28] and [47]) the relationship between the number of classes K and the dimension d required to observe the neural Collapse phenomenon is d≥K while in the submitted manuscript and in [47] d≥K-1. Is there any specific motivation for these two cases?\n\n3) Some important references are missing: [A,B]. Although not directly related to neural collapse, the paper [A] proposes a similar gradient analysis of the final classifier. A brief discussion highlighting differences and contributions should be given. The paper [B] seems to be one of the first published papers on maximal separation and fixed classifiers with the simplex geometry.\n\n4) Is the random ETF initialization needed to avoid possible biases in its choice? Is there any evaluation quantifying this dependency?\n\n5) Eq. 15 is referring to the optimization of the angle between features and classifier prototypes. What is the difference between that loss and the classic entropy loss in which both features and prototypes are normalized? 
Both minimize the angles between features and prototypes.\n\nReferences\n\n[A] P2SGrad: Refined Gradients for Optimizing Deep Face Models Xiao Zhang, Rui Zhao, Junjie Yan, Mengya Gao, Yu Qiao, Xiaogang Wang, Hongsheng Li; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019\n\n[B] Maximally Compact and Separated Features with Regular Polytope Networks Federico Pernici, Matteo Bruni, Claudio Baecchi, Alberto Del Bimbo; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 46-53\n\n[C] Shen Y, Xiong Y, Xia W, Soatto S. Towards backward-compatible representation learning. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2020 (pp. 6368-6377).\n\n[D] Biondi N, Pernici F, Bruni M, Del Bimbo A. CoReS: Compatible Representations via Stationarity. arXiv preprint arXiv:2111.07632. 2021 Nov 15.\n See above. No noteworthy limitations or potential negative societal impacts. ", " The paper proposes a loss function which projects the features to a pre-defined simplex and computes the gradients w.r.t it. It claims that neural networks use linear classifiers at the end and this is not needed if features are computed for each class and projected to a simplex. A theoretical explanation for this approach is provided and experimental results are presented. The idea to try to solve classification using a simplex is interesting but I believe there are several papers in the deep learning community for large-scale classification problems (like face/few-shot recognition) which have evaluated similar loss functions which have not been cited or compared with in this paper. I am mentioning a few of these papers and they use similar kinds of projection functions but also present more variants of the version presented in this paper. For example, Wen et. al. (ECCV 2016) and Snell et. al. Neurips 2017, use class means/prototypes as a loss function (called center/prototype loss) to improve recognition performance. The class means are then constrained to a sphere by Liu et. al in SphereFace (CVPR 2017) which further improves the performance (like the ETF proposed in this paper). Deng et al. in ArcFace (CVPR 2019) further add an angular penalty between points on a sphere. There have been several papers after this (look for papers citing these 4) which explore ways to do classification without a linear layer followed by softmax+CE which are based on per-class features and their margins.\n\n[1] A Discriminative Feature Learning Approach for Deep Face Recognition, Wen et. al, ECCV 2016\n\n[2] Prototypical networks for few-shot learning, Snell et. al, Neurips 2017\n\n[3] SphereFace: Deep Hypersphere Embedding for Face Recognition, Liu et. al, CVPR 2017\n\n[4] ArcFace: Additive Angular Margin Loss for Deep Face Recognition, Deng et. al, CVPR 2019 Line 23:\nThe paper starts by citing a reference [28] where a phenomenon called neural collapse is mentioned. For example, it is not clear why would features collapse to within-class mean? What is the loss function being used to do so? Are class labels even used during training which leads to this equiangular tight frame? The reader needs to be given a high level/intuitive introduction to ETF at this stage of the paper even though these details may be described in [28]. 
For example, it could state that, once a neural network is trained, [28] observes four properties (NC1-NC4) that characterize an ETF, and briefly describe each of them.\n\nPlease clarify how this work differs from the papers I mentioned, or whether it would be better compared to them in a given scenario. yes, it has been addressed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "1XxQSIEte5p", "uhix49Pfxw", "pA8z1FvEan", "lR-aSKBmFrB", "HlkpLqCDpcQ", "qaIa9S9UJgP", "1Sg0enDxdOL", "VoAj80bKMX", "1I2TB-00DH", "ZxW__SMyDm7", "ZxW__SMyDm7", "c7dDLS0U5n", "w5cpEtCbfT", "2waE7v9a4Mw", "skwJcr04771", "HIzVMfDQhM", "gNadXxrwkng", "UU7g_LQ9CN3", "HfMzT3AjeHt", "In0NLdUnayg", "nips_2022_A6EmxI3_Xc", "nips_2022_A6EmxI3_Xc", "nips_2022_A6EmxI3_Xc" ]
nips_2022_4R7YrAGhnve
SegViT: Semantic Segmentation with Plain Vision Transformers
We explore the capability of plain Vision Transformers (ViTs) for semantic segmentation and propose the SegViT. Previous ViT-based segmentation networks usually learn a pixel-level representation from the output of the ViT. Differently, we make use of the fundamental component—attention mechanism, to generate masks for semantic segmentation. Specifically, we propose the Attention-to-Mask (ATM) module, in which the similarity maps between a set of learnable class tokens and the spatial feature maps are transferred to the segmentation masks. Experiments show that our proposed SegViT using the ATM module outperforms its counterparts using the plain ViT backbone on the ADE20K dataset and achieves new state-of-the-art performance on COCO-Stuff-10K and PASCAL-Context datasets. Furthermore, to reduce the computational cost of the ViT backbone, we propose query-based down-sampling (QD) and query-based up-sampling (QU) to build a Shrunk structure. With our Shrunk structure, the model can save up to 40% computations while maintaining competitive performance.
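For intuition only, the following loose single-head PyTorch sketch illustrates the attention-to-mask idea described in the abstract: class tokens act as queries, their similarity map to the patch features is squashed into per-class soft masks, and a presence score gates each mask. The module names, projections, presence head, and shapes here are illustrative assumptions and do not reproduce the paper's actual ATM layer or its cascade/Shrunk variants.

```python
import torch
import torch.nn as nn

class AttentionToMaskSketch(nn.Module):
    """Toy single-head sketch: class tokens attend to patch features and the
    (pre-softmax) similarity map is reused as per-class soft masks."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.class_tokens = nn.Parameter(torch.randn(num_classes, dim))
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.classifier = nn.Linear(dim, 1)          # "does this class appear?" logit

    def forward(self, feats):                         # feats: (B, N, dim) patch features
        B, N, D = feats.shape
        q = self.q(self.class_tokens).expand(B, -1, -1)       # (B, C, D)
        k = self.k(feats)                                      # (B, N, D)
        sim = q @ k.transpose(1, 2) / D ** 0.5                 # (B, C, N) similarity map
        masks = torch.sigmoid(sim)                             # per-class soft masks
        tokens = torch.softmax(sim, dim=-1) @ feats            # attention-weighted class tokens
        cls_prob = torch.sigmoid(self.classifier(tokens))      # (B, C, 1) presence scores
        return cls_prob * masks                                # (B, C, N) per-class maps

atm = AttentionToMaskSketch(dim=256, num_classes=150)
maps = atm(torch.randn(2, 1024, 256))                 # (2, 150, 1024): one soft map per class
```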
Accept
This submission has received comments from 4 official reviewers. The authors have made very detailed replies to the reviewers' comments, and the authors and reviewers had quite rich discussions. After these discussions, 3 reviewers recommended weak acceptance and 1 recommended rejection. The authors clarified the novelty concerns during the rebuttal. The reviewers also recommended comparing with recent semantic segmentation methods using ViTs. Missing comparisons should be included in the final version, including comparisons with [1] Ma, Xuezhe, et al. "Luna: Linear unified nested attention." NeurIPS 2021. [2] Ryoo, Michael, et al. "Tokenlearner: Adaptive space-time tokenization for videos." NeurIPS 2021. [3] Wu, Yu-Huan, et al. "P2T: Pyramid Pooling Transformer for Scene Understanding", IEEE TPAMI, 2022. Only reviewer Eyo8 recommends borderline rejection; the authors have made quite a detailed rebuttal, but we have not heard back from this reviewer. Thus, the AC would like to recommend acceptance.
train
[ "I6nVqXxqxUB", "QvgxcJ28gSA", "PQGK_gchOwX", "IV_a0jjqaF", "YLLcYI3lXE_", "LULpVItfJA", "8VKrIqiMdrL", "rq7vAnYXwTm", "e3Nm3Cna_Za", "RozqhOBRSVZ", "706DyYC1nDI", "9rJIby3AtR-", "chxcAVxKNhp", "UtxgVeF5OWb", "wb7UM6E56K0", "GNPF-8kiwGv", "PXdCd0hmnqE" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **About QU/QD**\n\n*We thank YDPR for the great advice, we will add those cites to the paper.*\n\n**About Table 4 and Line 234-239**\n*In Maskformer, they first the queries to have multiple transformer decoder calculation with the high-level feature maps from the backbone. The results are 100 queries. Then they use two FC layers to generate the classification logits and the 1x1 kernels to have 1x1 convolutions with the low-level features. Thus for maskformer to process, it requires a high-level feature map and a low-level feature map. (They can use purely on high-level feature map only, however, the hierarchical backbone has 1/32 resolution backbones which is too small for segmentation, and with multiple steps of downsampling, the detailed information is lost.) \nOur ATM managed to make use of the attention which is a by-production along the calculation as the source to generate the mask, which enables the mask loss to be carried on any single feature map. This is a simple implementation, however, we proved its effectiveness and find a good application to use it (since on hierarchical backbones the single stage feature map information is not rich enough, ATM is not very effective).\nFor Table 4, the conclusion can only draw that compared with setr, we enabled the use of mask loss via the ATM module and boost the performance by 3.1\\%. That is the effect of this module only. \nWith this module, we proposed the cascaded structure and the shrink structure, which, without atm is not possible.*\n", " Thanks for the reviewer's reply!\nThe difference with Maskformer is not our novelty. The motivations of these two papers are totally different.\n\nThe MaskFormer aims at proposing a unified network structure that can handle all segmentation tasks together. The main contribution was the decoupled mask and segmentation loss. We respect and appreciate their contributions. Our main goal is to explore a simple and efficient network structure based on plain backbone ViT. Imagine if researchers in another field now want to introduce a semantically segmented head on the structure of a ViT, they may not have the same granular knowledge of the details of two papers (MaskFormer and Ours) as you do. Without our work, their direct transfer MaskFormer would have degraded performance and slow convergence.\n\nRegarding your detailed concern, we would like to provide more information as following:\n\n**First, the rebuttal says \"MaskFormer depends on Hungarian matching and each learnable queries corresponds to the spatial information rather than category information\", but actually MaskFormer mentions fixed matching in Sec 3.2.**\n\n*The fixed matching in Sec 3.2. in MaskFormer generates the learnable tokens from the backbone feature and generates the mask from the FPN features. They found that the semantic segmentation branch is possible to gain a 0.5\\% performance boost due to Hungarian matching. However, we found when we learn the masks from the attention map and generate the class tokens within one attention block, removing Hungarian matching will not lead to a performance drop.*\n\n**Second, the rebuttal claims \"ATM module do not need to add positional embedding like MaskFormer\". But to my knowledge, MaskFormer requires positional embedding for instance segmentation, otherwise the learned mask embeddings cannot distinct instances of the same class but in different position. However, for semantic segmentation, it should not be necessary.**\n*Yes. We agree with your opinion. 
That is why we removed the positional embedding. This is the difference, not the novelty over Maskformer.*\n\n**Third, the rebuttal introduces some advantages of plain ViTs, e.g. linking the text with the images (CLIP), unsupervised learning with larger scale datasets and so on. However, to my knowledge there is no evidence that hierarchical ViTs have any drawbacks on those tasks.**\n\n*Hierarchical ViTs may or may not have drawbacks. However, it requires a lot of computational resources to re-training the hierarchical ViTs on those large datasets, following DINO and CLIP. Our research enables the original ViT to do semantic segmentation efficiently. This will decouple the pre-training design from the fine-tuning demands, maintaining the independence of upstream vs. downstream tasks. Moreover, the original ViT does not have local convolutional layers, which enable the application of tricks from NLP, such as prompt tuning.*\n\n**About transferring MaskFormer and Mask2Former to plain ViTs.**\n\n*Thanks for your encouragement! We will add this result in our revision and hope we will have the chance to share these findings with the research community. We made fair implementations of maskformer to Vit backbones and used both our set of hyper-parameters and mask2former's own hyper-parameters. The possible reasons are in details but not trivial:*\n\n**1. The FPN in MaskFormer and the Cascade structure in our SegVit.** \n\n*For ViT backbones, the most semantic information-rich feature maps are the last layer. The intention of the maskformer structure in terms of semantic segmentation is to make good use of the multi-level information. From high-level small resolution to gather category/locational information and from low-level merged features to get mask details. However, this is not necessary for ViT backbones. We did experiments with no additional encoders, no fpn 3x3 convs, and fixed matching which is now very close to our ATM structure, the performance is (50.6 vs 51.2(ours)) *\n\n**2. Hungarain matching and our fixed class tokens.** \n\n*Another reason is the Hungarain matching involved. We found that with the involve of the matching, the converge speed is dramatically slowed on ViT backbones.*\n\nThis implies that the shrift to ViT backbone from maskformer is a non-trivial task. \n\n\n\n\n", " We thank the reviewer for the support!\n\nRegarding the novelty, please refer to the feedback from reviewer Nzhh. Our initial goal is to develop a simple and efficient structure for existing plain ViT backbones. The ATM model decoupled the mask and the class prediction on one transformer block. The cascade structure merges multi-level features with the help of attention machines. The Shrunk structure proposed a method that can reduce half of the computational cost in the backbone without damaging the pre-trained weight. All these contributions are developed purely based on ViT structure. \n\nThe plain backbone also has potential in many areas. It is suitable to merge with multimodality data, which a heuristical backbone can not do. If this direction is successful, it will enable the use of original ViT backbones for semantic segmentation; this will decouple the pre-training design from the fine-tuning demands, maintaining the independence of upstream vs. downstream tasks. \n\nThanks again.\n\n", " Thanks for the responses. I hope some of these comparisons could make it into a future version. I'm generally still quite okay with this submission living at the borderline. 
My opinion would be closer to accept than reject, but as others reviewers have mentioned, there is something left to be desired with respect to novelty.", " Dear all,\n\nThe novelty of this work is **not limited** to the design of the ATM. It explores a powerful and efficient decoder structure based on the ViT backbone. ATM is an implementation of decoupling the mask and the class to enable the new loss. It is simpler and more straightforward than the design in Maskformer (also more powerful on top of the ViT backbone). \n\nSegVit is an elegant, concise and efficient solution, resulting from a design that is more suitable for the transformer structure. \nAlthough we all know and admit that \n\n*1. The fc layer cannot decouple mask and classification, leading to low performance.*\n\n*2. The design of the MaskFormer is very complex (with Hungarian matching), the calculation is heavy (two different branches to decouple), and the effect is not good on ViT (4% decreased in mIoU with two times the computational cost).*\n\nDo we still insist on rejecting this solution because of the similarity of the implementation for one small block?\n\nConsider if now we want to use CLIP or DINO, a ViT-based backbone that is pre-trained on the large-scale dataset, the SegViT will be a good choice. The class tokens can be replaced with language embeddings. It will converge faster and generalize better. Moreover, the Shrunk structure can reduce the computational cost in these pre-trained models without damaging the pre-trained knowledge. Previous work based on transformers can not achieve this! They all require re-training.\n\nWe sincerely hope that the reviewers can reconsider the simplicity and advantages of our work The exploration of Vit backbone is our innovation.\n\nBest wishes,\n\nAuthors\n", " Thanks for the detailed rebuttal. It addressed partial of my concern.\n\n I still have concern on the novelty of ATM (also pointed by reviewer Eyo8). Although the rebuttal provides the result of per-pixel output (drop by 0.6%), I really fail to get the essential difference between the proposed ATM and the standard classifer (fc layer). Also, the reviewer Eyo8 points out the similarity of ATM with MaskFormer. So I keep my rating as borderline reject.", " \nThanks for the very detailed rebuttal. But I still have a few concerns on the novelty. Here are some comments:\n\n### About the differences to MaskFormer. I am not agree with the feedback:\n\n* First, the rebuttal says \"MaskFormer depends on Hungarian matching and each learnable queries corresponds to the spatial information rather than category information\", but actually MaskFormer mentions fixed matching in Sec 3.2.\n\n* Second, the rebuttal claims \"ATM module do not need to add positional embedding like MaskFormer\". But to my knowledge, MaskFormer requires positional embedding for instance segmentation, otherwise the learned mask embeddings cannot distinct instances of the same class but in different position. However, for semantic segmentation, it should not be necessary. \n\n* Third, the rebuttal introduces some advantages of plain ViTs, e.g. linking the text with the images (CLIP), unsupervised learning with larger scale datasets and so on. However, to my knowledge there is no evidence that hierarchical ViTs have any drawbacks on those tasks. \n\nAs a conclusion, I suggest the authors may consider carefully on justifying the novelty over MaskFormer and Mask2Former in the revised version, no matter whether the paper is accepted immediately. 
\n\n### About transferring MaskFormer and Mask2Former to plain ViTs\n\nThe experiment is very interesting. I recommend the authors add the results into the manuscript with proper explanation. I am not very clear why those methods cannot generalize well on plain ViTs. Is there any insights behind the implementation?\n\n### About QU/QD\n\nI agree with the justification on PVT. It is also recommended to cite [*1, *2], as those works also involve downsample/upsample with queries. \n\n[*1] Ma, Xuezhe, et al. \"Luna: Linear unified nested attention.\" NeurIPS 2021.\n\n[*2] Ryoo, Michael, et al. \"Tokenlearner: Adaptive space-time tokenization for videos.\" NeurIPS 2021. \n\n### About Table 4 and Line 234-239\n\nI am confused with the rebuttal. To my understanding, MaskFormer does not require per-pixel classification either (for each mask embedding, MaskFormer only generates a binary mask, which is very similar to this paper). So, my concern on Table 4 still exists: it seems the most performance gain comes from $\\mathcal{L}_{mask}$, however, which I think cannot be regarded as the **unique** contribution of this paper. \n\n\n", " Dear all reviewers,\n\nDo our responses answer your questions? Please let us know if you have any more questions. Thank you for your time.\n\nBest wishes,\nAll the authors", " We thank the reviewer for the time and effort. We address the concerns\nproposed by Reviewer Nzhh in detail\n\n**1) About the Attention-to-Mask module (ATM), it is implemented in cross-attention manner. But in fact, there is little difference with the standard classifier (a fc layer to map features to probability) for per-pixel classification in the normal semantic segmentation framework. Each learned token could be viewed as a classifier layer to map the pixel-level features into a probability with a Sigmoid function. In this sense, the ATM is similar to a standard classification layer.**\n\n*A: If only considering this layer, by simply replacing the attention mask with a per-pixel output, the performance will drop by 0.6\\%. Moreover, we do not claim the ATM module for our sole contribution in this paper. We use ATM to propose a structure that is capable of solving semantic segmentation tasks with a lightweight structure and high performance, which is non-trivial. Also, we dig into the topic of how to make use of the existing pre-trained weights while greatly reducing the computational cost. The Shrink structure we proposed is capable of reducing 40\\% computational cost without damaging the pre-trained weight.*\n\n**2) About the Shrunk structure, I am confused about the query-based downsampling operation (QD). In line 172, it says to use the nearest sampling to reduce the token numbers. In this sense, it has nothing with the query based downsampling and is simply a standard downsampling operation.**\n\n*A: For the QD, our approach is to reduce the size of the query tokens while keeping the resolution of the keys and values.\nWhen passing through a transformer layer, the output is the weighted sum of the 'value (feature map)' according to the attention map between 'query' tokens and the 'key' tokens. This is non-linear downsampling which will pay more attention to the important regions. While a standard downsampling operation will reduce the spatial size of the 'feature map' by a convolution layer with stride = 2, this is a grid sampling that reduces the feature map regularly. We want to directly use the existing pre-trained weights of plain backbones to enable the extension to other tasks. 
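As a rough illustration of the mechanism just described (shrinking only the queries while the keys and values stay at full resolution), the toy PyTorch snippet below is a deliberate simplification: it drops the learned q/k/v projections, multi-head attention, and the pretrained ViT weights of the actual QD layer, and the tensor sizes are placeholders.

```python
import torch
import torch.nn.functional as F

def query_downsample(tokens, grid_hw, stride=2):
    """Toy query-based downsampling: nearest-sample the token grid to get a
    shorter query set, then cross-attend to the full-resolution tokens, so the
    output sequence is shorter but every input token can still contribute."""
    B, N, D = tokens.shape
    H, W = grid_hw
    grid = tokens.transpose(1, 2).reshape(B, D, H, W)
    q = F.interpolate(grid, scale_factor=1 / stride, mode="nearest")    # (B, D, H/s, W/s)
    q = q.flatten(2).transpose(1, 2)                                    # (B, N/s^2, D) queries
    attn = torch.softmax(q @ tokens.transpose(1, 2) / D ** 0.5, dim=-1) # (B, N/s^2, N)
    return attn @ tokens                                                # shorter token sequence

x = torch.randn(2, 32 * 32, 192)
print(query_downsample(x, (32, 32)).shape)   # torch.Size([2, 256, 192])
```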
In table 7 we demonstrated how QD compares to the simple nearest downsample over qkv (q vs qkv: 52.6 vs 53.9)*\n\n**3) I am also confused about the implementation details on query-based upsampling operation (QU). It says to use a standard transformer decoder structure to upsample features. Is there any special design on the transformer decoder by incorporating the spatial information? More details are required on the decoder design.**\n\n*A: The first QU is made with random initialized 1/16 resolution tokens as queries and the 1/16 resolution layer outputs of ViT as keys and values. This QU records the information before the QD is applied to the ViT backbone. The structure is a typical transformer decoder, with a self-attention layer and a cross-attention layer. The second QU is the previous QU's output as queries and the downsampled ViT output as keys and values. Even though the keys and values have a smaller resolution, the output of a transformer decoder is dependent on the queries. Thus the output of the QU is still 1/16 in resolution.*\n\n**4) About the two QU operations in the version \\(c\\) of Figure 3, the downsampling ratio of lower QU is 1/16, and it is natural to think its output have smaller downsampling ratio like 1/8. However, from line 181-183, its output size seems to be 1/16, which is confused for me.**\n\n*A: We employ the QD and QU to reduce the computational cost in version \\(c\\) of Figure 3. The length of the q determines the spatial size of the output. We first apply a smaller q (QD) to reduce the computational cost of the backbone (from 1/16 to 1/32), then we apply a larger q (QU) to upsampling the final feature map. With this simple structure, we can reduce the computational cost while maintaining performance. The accuracy will be significantly reduced if this QU and QD are replaced with a standard convolution up-sampling and down-sampling.*\n", " We thank the reviewer for the time and effort. We address the concerns proposed by Reviewer Eyo8 in detail.\n\n**1) I doubt the novelty of the design of ATM module. Since the MaskFormer framework has been proposed for over half a year, the ATM module is similar to the MaskFormer...**\n\n*A: We made ablation studies on transferring maskformer and mask2former to ViT backbones. For maskformer, we adopt our code base using mmsegmentation using the exact same hyper-parameters that we use, the performance on ViT-base is (46.7 maskformer vs 51.2 Ours). For mask2former, we use the detectron2 code base and inherit all the hyper-parameters that mask2former original code use just changes the backbone to Vit-base, the performance is 48.3.\nThe design of this paper is based on the non-hierarchical ViT backbone to solve the semantic segmentation problem. Our conceptual difference lies in the assumption that we assign each class to one fixed class token and generate the mask of the specific class directly. While maskformer depends on Hungarian matching and each learnable query corresponds to the spatial information rather than category information. With this difference in assumption, our ATM module do not need to add positional embedding like maskformer.\nAlso, our major contribution claimed is to develop a lightweight framework that is efficient and effective on plain backbones. Experiments show that our method achieved SOTA performance using the least amount of computational cost among all existing plain backbone methods. 
Please refer to the response to reviewer YDPR for more details.*\n\n**2) Table 4 shows that ATM has a relatively low-performance gain of about 0.5\\% to SETR. It shows that the performance of ATM is even worse than Segmenter (Since the result of Segmenter is 0.8\\% better than SETR in Talbe 1)?**\n\n*A: Table 4 is an ablation study to demonstrate the effect of different losses on ATM structure. We utilize the ATM module to propose a structure that is capable of being lightweight and effective. The contribution of the paper is not limited to the design of ATM. How to apply the ATM in the cascade and shrunk structure is also a non-trivial work. In Table 1 we show that our proposed structure has higher performance than Segmenter while having lower computational cost.*\n\n**3) Also, in table 4, it shows that by using $L_{mask}$ loss the mIoU result increases about 2.6\\% than using CE loss only. However, Table 1 shows that the result of SegViT is about 2.3\\% better than Segmenter baseline (which only uses CE loss). If it shows that the performance gain is all from the new loss design but not from the framework architecture.**\n\n*A: We have a class token from the output of the transformer block and a segmentation mask from the attention map. These two outputs can be merged into one logits map to enable training with CE loss. But if the network can only generate a pixel level logits map, like in SETR and Segmentor, the mask loss can not be successfully applied. The results in Table 4 demonstrate that supervising the attention masks and the class tokens separately is a more natural way. This supervision is enabled by the ATM module in a simple and elegant way.*\n\n\n**4) Considering the similarity of MaskFormer and SegViT, I am curious about the result of MaskFormer + ViT + multi-layer-feature (use the same three ViT layer inputs to MaskFormer transformer decoder). I think the result may be similar with SegViT.**\n\n*A: We made ablation studies on transferring maskformer and mask2former to ViT backbones. For maskformer, we adopt our code base using mmsegmentation using the exact same hyper-parameters that we use, the performance on ViT-base is (46.7 maskformer vs 51.2 Ours). \nFor mask2former, we use the detectron2 code base and inherit all the hyper-parameters that mask2former original code use just changes the backbone to Vit-base, the performance is 48.3. Please refer to the response to reviewer YDPR for more details. \nThe low performance is possible due to 1, the use of Hungarian matching that brings slow convergence which is unnecessary for semantic segmentation, and 2, the use of FPN which introduced unnecessary parameters.*\n\n**5) In the paper, it claims that the SegViT framework has a quite good performance with Swin Transformer backbone and lists the results with Swin-Tiny. I am curious about SegViT + Swin-Large result and its comparison with MaskFormer and Mask2Former.**\n\n*A: The performance for the proposed structure and the MaskFormer are similar on Swin-Large but has a large performance gap on ViT backbones. That is because we design our structure for non-hierarchical ViT backbone with a large feature map resolution. The cascade and shrunk structures are all sepcifically designed for ViT structures. It is meaningful to explore the potential of the plain backbone. For example, the plain backbone is widely used in large pre-trained tasks. Our structure can be easily combined with CLIP or other mutimodel learning.*\n", " We thank the reviewer for the time and effort. 
We address the concerns proposed by Reviewer oFbW in detail. \n\n**1) While the results are good, related work like Segmenter is not far off from the performance presented here and shares some significant similarities with this method.**\n\n*A: We aim at proposing efficient segmentation networks with a plain backbone. Segmenter employs the class tokens as the input of the ViT backbone, which we found it unnecessary and low efficient. By introducing the class tokens as the input of the ATM, and supervising the attention map, our computational cost is much smaller than Segmenter (Gflops w/o. backbones: ours 25 vs Segmenter 59) while we outperform Segmenter by 1.4\\% on ADE20k dataset.*\n\n**2) I'm slightly confused about why one needs class predictions on the output tokens? \nIs this because the attention mechanism is bad at producing low confidence values when the class does not exist? Do you just multiply this probability by the sigmoid output?**\n\n*A: Yes, not only the attention mechanism, the mask loss which is originally used by DETR is not good at producing low confidence values and requires the class predictions to indicate whether the specific class exists in the input image or not. This probability is just multiplied by the mask which is the sigmoid output of the attention.*\n\n**3) Table 4 could be more convincing. The loss formulation is not novel, so why not also apply it to SETR?**\n\n*A: As in the question above, the mask loss is the combination of dice loss and binary cross-entropy loss for every single mask that has the ground truth. If there is not a class prediction to indicate, the performance can be rather low (37.4 applied to SETR, SETR 46.5 for regular CE loss)*\n\n**4) The idea is still related to the idea of dot product-based segmentation (from some class embedding). I think a good deal of experiments might need to be performed to actually understand the technical contribution.\nIt seems reasonable to provide a \"dot product\"-based baseline (similar to the decoder in Segmenter) to tease where attention (which itself is a dot product to some degree in this scenario) actually makes a difference.**\n\n*A: We thank the reviewer for the great suggestion. We run a variation of our work by simply changing the output from the attention mask to the dot product. The result decreased from 51.2 to 50.6 on Vit Base. This demonstrates that the attention output can improve the final results. Moreover, we observe with the ATM, the network can converge faster compared to the \"dot product\" baseline.*\n\n**5) Does this work when just supervising the aggregated output (a single per class segmentation loss)?**\n\n*A: No it won't work. The classification logits are needed to indicate the existence of its class. In the dataset, there can be a lot of similar categories e.g. house VS building, lake VS river VS sea. They can all have similar masks. The classification logits provide indications of which class to take.*\n\n", " **3) SegViT still utilizes FPN-like architecture to fuse low-level and high-level features ...**\n\n*A: The architecture, even though takes in features from multiple layers as input, is not in the purpose of FPN, of which the intention is to merge the features from multiple layers and generate a stronger feature map. In our structure, every layer is directly applied to the ATM and then the generated masks are summed. There is no feature maps merge process. 
Also, in the experiment, we found the information that the lower level provides are 'refining' the output (with most areas zero and only having value on edge areas), rather than 'merge'.\nWe also made ablation studies on expanding the resolution of the last layer as the VitDet. There was no obvious improvement. It's can partially because the semantic segmentation task is not as sensitive to multi-scale as the detection task. Also, without larger-scale feature maps (1/16 for all in ViT) a deconvolution or bilinear upsampling is not able to provide more semantic information.*", " **1.1) The overall framework is very similar to MaskFormer [15] and Mask2Former [47] ...**\n\n*A: We thank the reviewer for the time and effort. We address the concerns proposed by Reviewer YDPR in detail.\nThe design of this paper is based on the non-hierarchical ViT backbone to solve semantic segmentation problem. The main difference with the MaskFormer are summarized as follows:*\n1. *Our conceptual difference lies in the assumption that we assign each class to one fixed class token and generate the mask of the specific class directly. While maskformer depends on Hungarian matching and each learnable queries corresponds to the spatial information rather than category information.*\n2. *Our ATM module do not need to add positional embedding like MaskFormer, because we make use of the attention map between the class token and the feature map.*\n3. *Maskformer is highly dependent on the typical CNN structures, for example, FPN is used to merge the features from hierarchical backbones. However, in our work, we enhance the representation ability of the class tokens by transformer blocks. The masks are generated by the sum of the multi-layer attention maps.* \n\n*Also, our major contribution claimed is to develop a lightweight framework on plain backbones. Plain backbones have a lot of potential applications, for example, linking the text with the images (CLIP), unsupervised learning with larger scale datasets and so on. We are making use of the inartistic features of the vision transformers. Experiments show that our method achieved SOTA performance using the least amount of computational cost among all existing plain backbone methods.* \n\n*Besides, to further reduce the computational cost that the Vit backbone brings, we introduced the Shrunk structure, which is able to reduce computational cost on the backbone while still using the original pre-train weights. In table 7, we showed that for ViT backbones, regular methods like using a convolution layer to downsample the feature map, can seriously decrease the performance and make the pre-trained weights no longer effective. While using our Shrink structure, the performance can still be maintained. This enables the extension of the larger pre-trained models, e.g., CLIP, DINO, instead of training a new one from scratch.*\n\n**1.2) Although [15, 47] mainly evaluate on hierarchical backbones, theoretically they can also be equipped with plain networks.**\n\n*A: Transferring the decoupled output type of [15, 47] is not a **trivial work**. A straightforward transfer can not obtain competitive performance. We made ablation studies on transferring MaskFormer and Mask2Former to ViT backbones. 
The results are shown in the following table:*\n\n| Method | Backbone| mIoU on ADE20k|GFLOPs wo Backbone|\n| :---- | :----: |:----: |:----: |\n|MaskFormer| Vit-base|46.7 | 65|\n|Mask2Former |Vit-base| 48.3 | 43|\n|Ours | Vit-base| 51.2 | 32|\n\n**1.3) In addition, the proposed QU/QD layers are not novel ...**\n\n*A: For the QD, our approach is to reduce the size of the query tokens while keep the resolution of the keys and values.*\n*When passing through a transformer layer, the values are weighted and summed by the attention map between query tokens and the key tokens. This is non-linear downsampling that will pay more attention to the important regions. While in PVT, they reduced the spatial size of keys and values by a convolution layer with strides, this is a grid sampling that reduces the feature map regularly. Moreover, PVT focuses on designing a new hierarchical backbone and trains the backbone from the scratch. While we want to directly use the existing pretrain weights of plain backbones to enable the extension to other tasks. Also, in table 7 we demonstrated how additional convolution with strides downsampling will seriously reduce the performance and how QD compares to the simple nearest downsample over qkv (q vs qkv, 52.6 vs 53.9).*\n\n**2) According to Table 4 and Line 234-239, in the proposed ATM block, the separated...**\n\n*A: ATM is a way to enable the network to be supervised with the new loss. Our final mask (attention map) is generated related to every single semantic class. It is not a per-pixel representation. Thus, supervising with CE can not achieve a good performance.\nOur main contribution is to discuss and propose a lightweight and effective structure for plain backbones like ViT. It is a simple and elegant solution with competitive results on ViT backbones.\nDirectly applying the MaskFormer decoder can not achieve competitive performance.*\n\n\n\n\n\n\n\n", " The paper introduces SegViT, a semantic segmentation framework with plain ViTs as backbones. One of the core technical contributions is the proposed Attention-to-Mask (ATM) block, which generating masks from the intermediate attention maps between class embeddings and key maps. In addition, a shrunk structure is then proposed to save computational cost while maintaining the performance. Based on plain ViT networks only, SegViT obtains state-of-the-art results on three semantic segmentation datasets (ADE20K, PASCAL-Context and COCO-Stuff-10K). Pros\n\n1. The paper is well motivated. Recently many works (e.g. [*1]) have realized even plain ViTs could have rich representation capacity, which however requires special optimization (e.g. masked image modeling) or other architectural modifications for downstream tasks. I am pleased that the paper demonstrates that plain ViTs can obtain as good results as the hierarchical counterparts (e.g. [15, 47]) on segmentation tasks, which may encourage simpler and unified network design principles. \n\n2. Strong results are reported in the paper. For example, on ADE20K val a model with ViT-L backbone achieves 55.2 mIoU, which is very competitive even among more sophisticated networks, such as Swin-L and MViT. \n\n3. The motivation of ATM module sounds reasonable to some extent: intuitively a good attention mask should cover the foreground of the given object (or class). Therefore, it is possible to generate mask directly from the attention matrix. \n\nCons\n\n1. My major concern is that the technical novelty is relatively limited. 
The overall framework is very similar to MaskFormer [15] and Mask2Former [47]. Compared with [15], the major difference on the technical details is, [15] generate masks from the product of the mask embedding and the per-pixel embedding, while in the paper the mask is directly derived from the attention weights. However, I do not think it differs much. Although [15, 47] mainly evaluate on hierarchical backbones, theoretically they can also be equipped with plain networks. In addition, the proposed QU/QD layers are not novel (also sounds irrelevant to the main topic of the paper), since many previous works, e.g. PVT [17], also adopt similar blocks to reduce computational cost. In conclusion, I think the contributions claimed in the introduction seems not significant.\n\n2. According to Table 4 and Line 234-239, in the proposed ATM block, the separated supervision of classification and mask prediction is the most important design principle. However, it is not originally proposed in the paper as [15, 16] already introduces the paradigm. It further weakens the significance of the proposed method. \n\n[*1] Li et al. Exploring Plain Vision Transformer Backbones for Object Detection. Tech report. \n SegViT still utilizes FPN-like architecture to fuse low-level and high-level features from the backbone. However, some previous works [*1, *2, *3] challenge the essential of such feature fusion. It will be interesting if the authors show experiments where ATM module only relates to the last layer of ViT encoder. For example, like the experiments in Table 5, the authors may try a cascade design while utilizing the last layer multiple times, or just follow ViTDet fashion [*3].\n\n[*1] Zhang et al. ExFuse: Enhancing Feature Fusion for Semantic Segmentation. ECCV 2018. \n\n[*2] Chen et al. You Only Look One-level Feature. CVPR 2021.\n\n[*3] Li et al. Exploring Plain Vision Transformer Backbones for Object Detection. Tech report. \n Limitations are mentioned in the conclusion. Although I think more discussion and comparisons with [15] are required in the paper. ", " The authors deal with ViT-based semantic segmentation. In particular, they use a set of learnable tokens (each corresponding to a semantic class) which decode the outputs of the ViT-based backbone into per-class semantic masks. This is accomplished by multiple layers of cross attention between class token and ViT tokens. Rather than use a dot product like mechanism to produce similarity between a class token and spatial features, they directly supervise the cross attention maps using a sigmoidal output. Furthermore, they introduce a down/upsampling technique to mimic the general idea of an efficient multi-scale prediction head. Their results are quite good even when compared to some of the best recent models and their QD module provides some computation/performance tradeoffs. Strengths:\n\n1. This is a well written paper and the approach is quite clean\n2. The results presented are quite good as well, achieving at/near SOTA against competitive models\n\nWeaknesses\n\n1. The idea is still related to the idea of dot product based segmentation (from some class embedding). I think a good deal of experiments might need to be performed to actually understand the technical contribution.\n2. While the results are good, related work like Segmenter is not far off from the performance presented here and shares some significant similarities with this method. 1. I'm slightly confused about why one needs class predictions on the output tokens? 
Is this because the attention mechanism is bad at producing low confidence values when the class does not exist? Do you just multiply this probability by the sigmoid output?\n2. Table 4 could be more convincing. The loss formulation is not novel, so why not also apply it to SETR?\n3. It seems reasonable to provide a \"dot product\"-based baseline (similar to the decoder in Segmenter) to tease where attention (which itself is a dot product to some degree in this scenario) actually makes a difference.\n4. Does this work when just supervising the aggregated output (a single per class segmentation loss)? I believe so.", " The paper proposed a plain-ViT based semantic segmentation framework, which uses an Attention-to-Mask decoder to aggregate image features and a Shrunk structure to save computational cost. Strengths:\n1. The paper proposed SegViT framework and achieved a SOTA performance based on a plain ViT backbone. \n2. The paper is well written and clear to understand.\n\nWeaknesses:\n1. I doubt the novelty of the design of ATM module. Since the MaskFormer framework has been proposed for over half a year, the ATM module is similar to the MaskFormer transformer decoder module. The only difference is that the mask output of MaskFormer is generated by the multiplication of the final output query tokens and image features, while the mask output of ATM is from the multiplication of an intermediate variable K inside the transformer layer and the image features. The difference is not obvious. The SegViT framework is just like MaskFormer + ViT + multi-layer-feature.\n2. Table 4 shows that ATM has a relatively low performance gain of about 0.5% to SETR. It shows that the performance of ATM is even worse than Segmenter (Since the result of Segmenter is 0.8% better than SETR in Talbe 1)?\n3. Also, in table 4, it shows that by using L_mask loss the mIoU result increases about 2.6% than using CE loss only. However, Table 1 shows that the result of SegViT is about 2.3% better than Segmenter baseline (which only uses CE loss). If it shows that the performance gain is all from the new loss design but not from the framework architecture. 1. Considering the similarity of MaskFormer and SegViT, I am curious about the result of MaskFormer + ViT + multi-layer-feature (use the same three ViT layer inputs to MaskFormer transformer decoder). I think the result may be similar with SegViT. \n2. In the paper, it claims that the SegViT framework has a quite good performance with Swin Transformer backbone and lists the results with Swin-Tiny. I am curious about SegViT + Swin-Large result and its comparison with MaskFormer and Mask2Former.\n The authors have addressed the limitations and potential negative societal impacts.", " This paper present a semantic segmentation method based on the plain vision transformer (ViT). Specifically, it proposes the attention-to-mask (ATM) module to generate the pixel-level mask. In addition, to reduce the computational cost, it designs a query-based downsampling (QD) and upsampling module (QU) in the shrunk version. Experiments are conducted on three datasets and better results are obtained compared with previous methods. **Strength**\n\n1. Exploring plain architecture for semantic segmentation is an interesting and promising direction. This paper make a forward step towards this direction.\n2. The performance of SegVIT seems to be better than previous state-of-the-art methods.\n\n**Weakness**\n\n1. 
About the Attention-to-Mask module (ATM), it is implemented in cross-attention manner. But in fact, there is little difference with the standard classifier (a fc layer to map features to probability) for per-pixel classification in the normal semantic segmentation framework. Each learned token could be viewed as a classifier layer to map the pixel-level features into a probability with a Sigmoid function. In this sense, the ATM is similar to a standard classification layer.\n\n2. About the Shrunk structure, I am confused about the query-based downsampling operation (QD). In line 172, it says to use the nearest sampling to reduce the token numbers. In this sense, it has nothing with the query based downsampling and is simply a standard downsampling operation. I am also confused about the implementation details on query-based upsampling operation (QU). It says to use a standard transformer decoder structure to upsample features. Is there any special design on the transformer decoder by incorporating the spatial information? More details are required on the decoder design.\n\n3. About the two QU operations in the version (c) of Figure 3, the downsampling ratio of lower QU is 1/16, and it is natural to think its output have smaller downsampling ratio like 1/8. However, from line 181-183, its output size seems to be 1/16, which is confused for me.\n\n4. I think this paper should compare with the previous works PerceiverIO, which employs a similar downsampling-upsampling architecture for dense prediction with transformers. More discussion on the difference is required to better motivate the proposed method.\n Please address the above concerns in the weakness. The authors have addressed the limitations of the proposed method in large memory consumption." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "8VKrIqiMdrL", "8VKrIqiMdrL", "IV_a0jjqaF", "706DyYC1nDI", "LULpVItfJA", "e3Nm3Cna_Za", "9rJIby3AtR-", "nips_2022_4R7YrAGhnve", "PXdCd0hmnqE", "GNPF-8kiwGv", "wb7UM6E56K0", "chxcAVxKNhp", "UtxgVeF5OWb", "nips_2022_4R7YrAGhnve", "nips_2022_4R7YrAGhnve", "nips_2022_4R7YrAGhnve", "nips_2022_4R7YrAGhnve" ]
nips_2022_Vt3_mJNrjt
Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
This paper presents a new efficient black-box attribution method built on Hilbert-Schmidt Independence Criterion (HSIC). Based on Reproducing Kernel Hilbert Spaces (RKHS), HSIC measures the dependence between regions of an input image and the output of a model using the kernel embedding of their distributions. It thus provides explanations enriched by RKHS representation capabilities. HSIC can be estimated very efficiently, significantly reducing the computational cost compared to other black-box attribution methods. Our experiments show that HSIC is up to 8 times faster than the previous best black-box attribution methods while being as faithful. Indeed, we improve or match the state-of-the-art of both black-box and white-box attribution methods for several fidelity metrics on Imagenet with various recent model architectures. Importantly, we show that these advances can be transposed to efficiently and faithfully explain object detection models such as YOLOv4. Finally, we extend the traditional attribution methods by proposing a new kernel enabling an ANOVA-like orthogonal decomposition of importance scores based on HSIC, allowing us to evaluate not only the importance of each image patch but also the importance of their pairwise interactions. Our implementation is available at \url{https://github.com/paulnovello/HSIC-Attribution-Method}.
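A minimal NumPy sketch of the standard kernel-based HSIC estimator underlying this kind of dependence measure (assuming RBF kernels and a median-heuristic bandwidth): this is only an illustration, not the authors' implementation, which is available at the repository linked above. In the paper's setting, x would roughly correspond to the binary patch perturbations and y to the model's output for each perturbed image.

```python
import numpy as np

def rbf_gram(z, bandwidth=None):
    """Gram matrix of an RBF kernel; the bandwidth defaults to the median heuristic."""
    z = np.atleast_2d(np.asarray(z, dtype=float))
    if z.shape[0] == 1:          # a 1-D input arrived as a row vector: reshape to (p, 1)
        z = z.T
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    if bandwidth is None:
        bandwidth = np.sqrt(np.median(sq_dists[sq_dists > 0]) / 2.0) + 1e-12
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def hsic(x, y):
    """Biased HSIC estimator tr(K H L H) / (p - 1)^2, with H the centering matrix."""
    p = len(x)
    K, L = rbf_gram(x), rbf_gram(y)
    H = np.eye(p) - np.ones((p, p)) / p
    return float(np.trace(K @ H @ L @ H)) / (p - 1) ** 2

# Sanity check: a dependent pair should typically score higher than an independent pair.
rng = np.random.default_rng(0)
a = rng.normal(size=500)
print(hsic(a, a ** 2), hsic(a, rng.normal(size=500)))
```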
Accept
The paper proposes a novel black-box explanation method. The proposed method uses HSIC to measure the dependence between randomly masked inputs and the corresponding outputs, and identifies relevant patches. Based on the decomposition property, the proposed method can also find interactions between patches. Experiments quantitatively show that the proposed method outperforms (or is comparable to, on some evaluation measures) existing black-box methods at a lower computational cost. Gains are also demonstrated qualitatively, by finding the cause of a wrong prediction in an object detection task and by identifying interactions between patches. Reviewers raised concerns mainly about clarity, which the authors addressed well. I expect that the presentation of the final version will be much clearer. A good paper with an interesting idea of using HSIC, which brings benefits in both explanation performance and computation time. Furthermore, it allows explaining interactions, which most existing methods do not. The advantages of the proposed method are demonstrated quantitatively and qualitatively.
train
[ "zqOQUOBZknr", "h0SInk7yyAT", "7USJWAHwToJ", "t3QM193nJl5", "e4fUNmDOHgq", "FmT8peNmHsm", "1j1Fs7y8Qf4", "z-bNZzewpa", "TdOwa78HFCF", "I9utDw2FS9", "yHBitF8bvq_Q", "nS_1l_92KlQ", "n47yNBtYXI", "J4uRUY0SjSF", "jg052CJhnFs", "MxOtw2o5BfL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response and further explanations. I like the clarification on the patch interaction part. I increase the score accordingly.\n\n", " Thank you for addressing my concerns. This work is interesting and adds value to the field. ", " Thank you for the additional explanations. I will take them into account in the final discussion and will update my review accordingly.", " Thank you for your response. The current page limit will make it difficult to add more details to the revised manuscript, so we include our suggestions in this response and will add them to the paper upon acceptance. \n## W1\n* Line 130, in place of \"For each image patch, the mask values define a drift from the original image towards the baseline value µ.,\" we suggest writing: \"Hence, the mask $\\mathbf{M}$ aggregates the patch-wise random perturbations $M_i$ that are sampled independently for each patch ($M_i$ are iid). In practice, the perturbations contained in the mask are binary perturbations, to simulate whether the information contained in the patch is kept in the image or not.\"\n* In Sec 3.2, the $x_i$'s denote general samples of any random variable. This notation is used to introduce the general definition of $\\mathcal{H}^p_{\\\\textnormal{x}, \\\\textnormal{y}}$. Indeed, each $x_i$ does not denote a mask but a patch, so we remove the confusing words \"(e.g. random masks).\". In return, we further expand the added description l. 162 \"... complexity. *In this work, the input variables $x_i$ are the patch perturbations $M_i$.* Therefore, we compute ... \"\n* We keep the most cited references.\n## W2\n* L. 604, we indeed write that \"a negative value has no meaning since the metric is a distance\". This negative value is obtained using the formula $HSIC_{inter}( x_{1,2} , y) = HSIC(x_{1,2} , y) - HSIC(x_1, y) - HSIC(x_2, y)$ in a case where the decomposition property is not satisfied. HSIC is supposed to be a distance, so obtaining a negative value demonstrates that this formula does not make sense without this property.\n* When the decomposition property does not hold, the formula does not make sense. Therefore, even if thresholding HSIC to be positive would lead to the correct conclusion in the present example, nothing guarantees that the opposite effect, where the interactions would be overestimated, will not occur in another case.\n* We corrected the typos.\n* The metrics differ because they are not computed with the same kernel (RBF vs Sobolev). In addition, for HSIC values that are so low, the approximation error might have some impact on the estimation. \n## Q1-2\nWe will add this information to the main paper upon acceptance, thanks to the increased page limit. \n\nEmpirically, the figure in Sec 4.2 shows that until a high number of samples, the explanation maps for RISE and Sobol poorly correlate with the \"asymptotical baseline\". This alone indicates that the HSIC explanation is converging faster to its final state (and thus its final fidelity score) whereas RISE and Sobol are oscillating and suffering from high estimation noise before converging. Theoretically, for Sobol, Theorem 1 of [1] states that the estimation error is in $\\\\mathcal{O}(\\frac{1}{\\sqrt{p}})$ for $p \\times (d+2)$ samples, with $d$ the number of patches. In the Sobol paper [2], the number of patches considered is $121$, so for 764 samples, the estimation error would be in $\\\\mathcal{O}(\\frac{1}{\\sqrt{6}})$ (because $p \\times (d+2) = 764$ leads to $p \\approx 6$ for $d = 121$). 
In that case, evaluating the performance of these methods would not make sense: the results could be better or worse purely out of randomness. \n\nNonetheless, we include a new study in Appendix G, showing the evolution of Deletion for HSIC, RISE and Sobol with respect to the number of samples. It shows that RISE and Sobol need more samples to reach their final Deletion score.\n## Q3\nThis discussion is included in Appendix F.2. Upon acceptance, we will include it in the paper.\n\nThank you for this constructive discussion. We believe that, as a result, our paper will be significantly improved.\n\n[1] Making best use of model evaluations to compute sensitivity indices, Saltelli, Computer Physics Communications, 2002\n\n[2] Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis, T. Fel, R. Cadene, M. Chalvidal, M. Cord, D. Vigouroux, and T. Serre, NeurIPS 2021\n", " ### RE W1: Technical communication\n\nI appreciate the updated text in Sec 3.1 and Sec 3.2.\nThere's been good progress toward addressing my concerns, but there remain some lingering concerns:\n\n* In Sec 3.1: I think M still needs a \"plain talk\" definition... yes it is a binary vector in d-dim space, but what does it represent? Is it just indicating, over all possible patches of a certain size, which ones are masked and which are not? I think even the revised notation is trying to be too general and is going to confuse many readers, even those with lots of image processing experience.\n\n* In Sec. 3.2: I still find it disorienting that previously in Sec 3.1 symbol \"x\" denoted image pixels, but now in 3.2 \"x\" denotes a mask (line 159 of revised version from Aug 1). You could at least prepare the reader to make this mental switch.\n\n* (not that important): citing 5 papers [17 , 40 , 61, 36, 13 ] seems excessive for defining a simple inpainting operation\n\n\n## RE W2: Orthogonal decomposition property in particular is not well explained or justified\n\nThe new example in the appendix is helpful, thanks!\n\nA few issues remain though, looking especially at Table 2 in App A:\n\n* How can HSIC be negative, if it is defined as a distance? Is this an error in approximation or something? (I am likely missing something obvious...)\n* Suppose I decide in practice to treat any negative HSIC values as a 0 (since negative values don't \"make sense\"). Don't I essentially then draw the same conclusions using both RBF (where the property does NOT hold) and Sobolev?\n* The column titles in Table 2 have some typos: (x2,y) is duplicated\n* Shouldn't the HSIC metrics in Table 1 and Table 2 of App A match for x2, y when using the RBF kernel? Why do they differ? \n\n## RE Q1-2: Fair comparisons\n\nOK so you use previously picked \"defaults\" that prefer accuracy over speed. I guess this should just be stated more clearly, perhaps indicating in the caption/table what you did for each method.\n\nIdeally, you could include an \"efficient\" version of RISE/Sobol that uses a much lower number of perturbations. Perhaps the figure in Sec 4.2 is enough to suggest this won't work very well, but that figure measures something very different than what's reported in Tab 1.\n\n## RE Q3 Why deletion but not insertion\n\nThanks for the detailed investigation. Please include it in the final paper.", " Thank you very much for your detailed feedback. 
We address your comments and questions following the outline of your review.\n## Comments \n* We agree that the terminology of black-box / white box attribution method may be confusing, but that this is how they are referred to in the recent literature (e.g. [3,4,5]). Hence, we prefer to keep this formulation for consistency with prior works.\n* Following your guidance, we have made some parts of the paper clearer.\n* The reference [18] is not used to discuss the gradients but to justify the fact that considering infinitesimal perturbations can be misleading.\n## Questions\n\n**1.** We introduce an illustrating toy example in an openreview general comment above **\"Motivating example for orthogonal decomposition property\"**. This example clarifies the motivations for assessing the interactions and shows why the orthogonal decomposition property, granted by Proposition 1, is necessary to be able to do so. It demonstrates that without this property, applying a standard method to a pair of patches would not actually measure the importance of interactions\n\n**2.** In the paper, we explicit that Sobol estimator theoretically needs $p \\times (d+2)$ samples to be estimated with an error in $\\mathcal{O}\\Big( \\frac{1}{\\sqrt{p}}\\Big)$, while HSIC only needs $p$ samples. These statements are referenced in [1] for Sobol, Theorem 1, and in [2] for HSIC, Theorem 1. To the best of our knowledge, they are the only methods that enjoy Theorems for their approximation guarantees. We add the mention of the theorems in the manuscript.\n\n**3.** As it is mentioned in the paper (l.260), $\\mu$Fidelity Table has been postponed in the appendix because standard estimation errors were particularly high. \n\n***\"The proposed approach works best with respect to Deletion, but not with respect to Insertion, which is dominated by RISE. This indicates that there could be a tradeoff between Insertion and Deletion similar to Precision and Recall\"***. \nSince Deletion measures a drop in the score, the faster the score drops, the better the metric. Hence, Deletion will favor methods that sharply identify important regions. On the contrary, since Insertion starts from an arbitrary baseline image, if the explanation map is more spread out, more relevant secondary information will be added, so the score will be better. As we can see in the maps of Appendix C, RISE saliency maps are way more spread out than HSIC's, which are sharper. It may explain why RISE is better in the Insertion benchmark and why HSIC attribution method dominates the Deletion benchmark (Section 4.1 Table 1). We provide additional quantitative examples to illustrate this link in Appendix F. Note that even if RISE dominates Insertion, it is far behind in Deletion. This is not the case for HSIC, which is still competitive in Insertion while dominating Deletion. We add those discussions in Appendix F.\n\n**4.** The experiment of Section 4.2, assessing the runtime of HSIC w.r.t. other methods, is inspired by the paper [3], which compares the convergence speed of Sobol with RISE. The convergence is assessed by computing the correlation of the explanations for an increasing number of samples (axis Forward in Figure 2) with what we call a \"baseline asymptotical explanation\". Ideally, this \"baseline asymptotical explanation\" should be the explanation obtained with $p \\rightarrow +\\infty$. Since we cannot obtain this baseline, we approximate it with a very high number of samples (way larger values used in practice), here 13,000. 
As we explain in the footnote of Section 4.2, we use quotations between \"asymptotical\" because it is not theoretically asymptotical\n\n[1] Making best use of model evaluations to compute sensitivity indices, Saltelli, Computer Physics Communications, 2002\n\n[2] Measuring Statistical Dependence with Hilbert-Schmidt Norms, Gretton et al, Proceedings of the 16th International Conference on Algorithmic Learning Theory, 2005 \n\n[3] Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis, T. Fel, R. Cadene, M. Chalvidal, M. Cord, D. Vigouroux, and T. Serre, NeurIPS 2021\n\n[4] Rise: Randomized input sampling for explanation of black-box models. V. Petsiuk, A. Das, and K. Saenko. In Proceedings of the British Machine Vision Conference (BMVC), 2018.\n\n[5] Black-box explanation of object detectors via saliency maps, V. Petsiuk, R. Jain, V. Manjunatha, V. I. Morariu, A. Mehra, V. Ordonez, and K. Saenko. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", " ## Limitations\n\n* In Table 1, we report the value of Insertion and Deletion metrics for HSIC method and for different grid sizes, obtained after a grid search for MobileNetV2 on 1000 ImageNet validation images. The metrics are averaged over 27 runs.\n\n| grid size | 5 | 6 | 7 | 8 | 9 | 10 |\n| -------------------------- | ---- | ---- | ---- | ---- | ---- | ---- |\n| Insertion $\\times 10^{-1}$ | 4.14 | 4.02 | 3.90 | 3.72 | 3.54 | 3.40 |\n| Deletion $\\times 10^{-1}$ | 1.01 | 0.97 | 0.94 | 0.93 | 0.92 | 0.90 |\n**Table 1**: Result of a grid search for MobileNetV2. \n\nThese results show the effect of the grid size on the metric. Note that it corroborates the previous observation that a sharp explanation (high grid size) favors Deletion, and a spread-out explanation (low grid size) favors Insertion.\n\n* The choice for $k$ is defined by Proposition 1, which is a condition to using the decomposition property so it cannot change. The choice of $l$ may change, but RBF is largely adopted in the community when working with HSIC or RKHS.\n* The choice of the bandwidth of $l$ could be performed in a more principled manner (e.g. by maximizing the maximum mean discrepancy, as recommended in [3]), but it would affect the efficiency of the method (one optimization for each explanation), so we chose to follow the common practice of selecting the median.\n\n[1] Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. Advances in Neural Information Processing Systems (NeurIPS), 2018\n\n[2] Amirata Ghorbani, Abubakar Abid, and James Zou. Interpretation of neural networks is fragile. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2017\n\n[3] Kernel choice and classifiability for rkhs embeddings of probability distributions, Fukumizu, K., NeurIPS 2009\n", " Thank you for your detailed feedback. We have addressed the minor issues directly in the revised version. The answers are organized following the outline of your review.\n## Weaknesses\n### W1 - Technical communication issues\n#### Section 3.1\n1. There is indeed a clash between dimensions $n$ and $d$. We solved this problem by adding another type of mask $M'$ constructed out of $M$, which is an upsampled version of $M$ to match the dimension $n$ of the input image. \n2. For the baseline value $\\mu$, we use a black image (with value 0). \n#### Section 3.2\n1. 
Indeed, the obtained indices $\\mathcal{H}_i$ are scalars that are computed for each individual image patch. We make it clearer in the description of Eq. 2.\n2. We follow your suggestion and make the notations easier to read by denoting the input image by $X$ instead of $x$.\n### W2 - Orthogonal decomposition is not clear\nWe included a new pedagogical example in an openreview general comment above **\"Motivating example for orthogonal decomposition property\"**. This example shows the importance of assessing the interactions and how the orthogonal decomposition property is necessary to do so. Note also that a more detailed description of the orthogonal decomposition property can be found in Appendix A. \n\nThe formulation ***\"When using Sobol indices based GSA, the sum of the indices of all possible $\\textnormal{\\textbf{x}}_A$ is equal to 1\"*** refers to the ANOVA / orthogonal decomposition property of Sobol indices, which are often normalized so that their sum is equal to 1. In the sensitivity analysis literature, this way of referring to the decomposition property is sometimes used to convey that the indices measure the independent (orthogonal) effects of variables. We plan to describe the decomposition property more thoroughly in the main document upon acceptance so that there is less confusion about these wordings.\n### W3\nWe already included visualizations of explanations for kernelShap and RISE in Appendix C.\n### W4\nAttribution methods aim at explaining the output of a neural network, not finding human-level explanations. When the method emphasizes that a Puma's mouth is more important than its eyes, it only means that it is **from the perspective of the neural network**. In fact, there is a broad consensus that measuring the human-level plausibility of an explanation alone is not sufficient [1, 2]. The goal of explainability is to uncover the true underlying decision process of a model, not the consensus with a human explanation. \n## Questions\n\n***1. What was done to ensure fair comparison of timings in Table 1?*** \nThe numbers of perturbations used in RISE and Sobol are those of the original papers, i.e. 8000 for RISE and 4900 for Sobol. We just took the metric from their original papers, and therefore we used the same setting. Moreover, Section 4.2 shows that RISE and Sobol need more samples to obtain relevant explanations.\n\n***2. In Table 2, why use 5000 samples for D-RISE but fewer for your method?*** \nAs above, we also took the same parameters as the D-RISE's paper. \n\n***3. What does it mean for a method to score better on Insertion than Deletion?*** \nSince Deletion measures a drop in the score, the faster the score drops, the better the metric. Hence, Deletion will favor methods that sharply identify important regions. On the contrary, since Insertion starts from an arbitrary baseline image, if the explanation map is more spread out, more relevant secondary information will be added, so the score will be better. As we can see in the maps of Appendix C, RISE saliency maps are way more spread out than HSIC's, which are sharper. It may explain why RISE is better in the Insertion benchmark and why HSIC attribution method dominates the Deletion benchmark (Section 4.1 Table 1). We provide additional quantitative examples to illustrate this link in Appendix F. Note that even if RISE dominates Insertion, it is far behind in Deletion. This is not the case for HSIC, which is still competitive in Insertion while dominating Deletion. 
We add those discussions in Appendix F.", " Thank you for your positive feedback. As you noticed, finding quantitative comparison is challenging without ground truth. That is why beating the state-of-the-art in terms of Insertion and Deletion was not our only focus:\n\n* Our method is far more efficient than other SOTA black-box methods\n* Bringing the RKHS framework to the domain of AI Explainability (XAI) enables many research perspectives\n* This framework makes it possible to assess the importance of patch-wise interactions.\n\nYou and the other reviewers shared concerns about the clarity of the last point, so following your suggestion, we introduce a simple pedagogical example described in an openreview general comment above **\"Motivating example for orthogonal decomposition property\"**. We recommend you to read it before going on with this response. \n\n**Q1:** ***Shouldn't attributions from each image patch overlap with the attributions from interactions?*** \nThe fact that $\\mathcal{H}_i$ and $\\mathcal{H}_{i\\times j}$ are different is an empirical example that sometimes, interactions are as important as individual effects. The additional motivating example provided shows a simple case where it can happen. It also shows that the orthogonal decomposition property allows properly assessing interactions by removing individual effects from interaction effects, which explains why the heatmaps do not always overlap.\n\n**Q2:** ***Is there an XOR situation that could highlight that Hixj is able to uncover unique interactions that would be missed by attributions of individual patches?*** \nWe followed your suggestion and provided an XOR example where unique interactions would be missed by a classical attribution metric (cf **\"Motivating example for orthogonal decomposition property\"** in openreview general comment above)\n\n**Q3:** ***Is there a way to quantify expected interactions?*** \nThe orthogonal decomposition property allows to quantify these interaction effects properly and even to compare their importance with that of individual effects, as shown by the motivating example. For instance, in Section 4.4, thanks to this property, we are able to compare the importance of the interactions of the mustaches of a puma with its eye (and we find out that the interactions are more important).", " Thank you for your detailed feedback. We have addressed the minor issues directly in the revised version.\n## 1. Comments on decomposition property\n### HSIC attribution method does not require Proposition 1 to operate\n***\"Is it a condition to enable the incorporation of HSIC metric in this specific scenario?\"*** Theoretically, HSIC attribution method could be used with any kernel $k$ for embedding the perturbations $M_i$. Hence, this proposition is not a condition to enable using HSIC as an attribution method. Proposition 1 only describes the conditions needed to obtain the decomposition property. \n### Link between decomposition property and explainability\nIn order to explain a model prediction, some areas of the image may only be important in interaction with other areas, affecting the output only when both areas are perturbed jointly. However, properly assessing interactions is non-trivial. The decomposition property allows to simply and correctly assess interactions. It is illustrated in an openreview general comment above, **\"Motivating example for orthogonal decomposition property\"**. The described experiment illustrates why Eq. 
5 of the manuscript ***\" does not ensure that we assess the importance of the interactions only\"*** if the decomposition property does not hold. Indeed, in that case, the conditions to be able to use Eq. 5 are not fulfilled, so one cannot use it to assess the importance of the patch-wise interactions.\n## 2. Pixel-wise aspect of explanation maps\nIf we choose a grid of size $7\\times 7$ , so will be the dimension of the HSIC heatmap. To be able to overlap it with original images, we need to upsample them. The pixel-wise aspect simply comes from using a bilinear upsampling, as in [1,2]. We include this comment in the revised version.\n\n[1] Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis, T. Fel, R. Cadene, M. Chalvidal, M. Cord, D. Vigouroux, and T. Serre, NeurIPS 2021\n\n[2] Rise: Randomized input sampling for explanation of black-box models. V. Petsiuk, A. Das, and K. Saenko. In Proceedings of the British Machine Vision Conference (BMVC), 2018.", " We would like to thank the reviewers for their comments and questions. They recognized the relevance and the novelty of the proposed method. The main concerns of the reviewers were about the clarity of some aspects of our work and some technical details that we address in individual responses. We modify the paper accordingly and highlight modifications in the revised manuscript in blue.\n\nBefore going on with the individual responses, we would like to comment on the orthogonal decomposition property since all reviewers shared concerns about its clarity. First, a detailed description of the decomposition property can be found in Appendix A. Second, to further clarify that point, and as suggested by Reviewer **Zkzj**, we introduce a motivating example that aims at:\n* describing a case where **interactions are important to explain a model's output**,\n* showing why **the orthogonal decomposition property is necessary to assess the interactions properly**.\n\nWe include this example in a second general response entitled **\"Motivating example for orthogonal decomposition property\"**. We also add it to Appendix A. Upon acceptance, we will include a more detailed discussion about the decomposition property and the interactions in the main document.", " Let us demonstrate the importance of interactions and the decomposition property with a simple example (similar to an XOR toy example, as suggested by Reviewer **Zkzj**). Let $f: [0,2]^3 \\rightarrow \\{0,1\\}$ such that \n$$\n \\\\textnormal{y} = f(x_1, x_2, x_3) = \n \\\\begin{dcases}\n 1 & \\text{if } x_1 \\in [0,1], x_2 \\in [1,2], x_3 \\in [0,1],\\\\\\\\\n 1 & \\text{if } x_1 \\in [0,1], x_2 \\in [0,1], x_3 \\in [1,2],\\\\\\\\\n 0 & \\text{otherwise. } \n \\\\end{dcases}\n$$\nThe function $f$ is illustrated in Appendix A. Here, $x_i$ is analogous to $M_i$. In that case, it is clear that $x_1$ is important to explain the output. However, assessing the effect of $x_2$ and $x_3$ is more difficult. They are clearly important to explain the output $\\textnormal{y}$, but it can be shown theoretically that $HSIC(x_2, \\textnormal{y})=0$ and $HSIC(x_3, \\textnormal{y})=0$. **This motivates to assess the interactions between input variables**. 
One way to retrieve the information that $x_2$ and $x_3$ are important is to assess $HSIC( x_{2,3} , \\textnormal{y})$, where $x_{2,3} = (x_2,x_3)$.\n\nAs noticed by Reviewer **eNpV**, one could assess $HSIC( x_{2,3} , \\textnormal{y})$ without any constraints on the kernel $k$, and the obtained value for $HSIC( x_{2,3} , \\textnormal{y})$ would indeed be $>0$. However, by doing so, we would also obtain that $HSIC( x_{1,2} , \\textnormal{y}) > 0$ and $HSIC( x_{1,3} , \\textnormal{y}) > 0$, whereas $x_1$ does not interact with $x_2$ and $x_3$, only because of the individual effect of $x_1$. We empirically illustrate this by assessing these metrics using the estimator of Eq. 2 with $p=10000$, and kernels $k$, $l$ chosen as the Radial Basis Function (RBF). The results are found in Table 1 below and show that:\n* $HSIC(x_2, \\textnormal{y}) \\approx HSIC(x_3, \\textnormal{y}) \\approx 0$\n* $HSIC( x_{1,2} , \\textnormal{y}) \\approx HSIC( x_{1,3} , \\textnormal{y}) > HSIC( x_{2,3}, \\textnormal{y})$\n\n| $HSIC(x_1, \\textnormal{y})$ | $HSIC(x_2, \\textnormal{y})$ | $HSIC(x_3, \\textnormal{y})$ | $HSIC( x_{1,3} , \\textnormal{y})$ | $HSIC( x_{1,2} , \\textnormal{y})$ | $HSIC( x_{2,3} , \\textnormal{y})$ | \n| - | - | - | - | - | - | \n| $1.79 \\times 10^{-2}$ | $2.28 \\times 10^{-6}$ | $9.63 \\times 10^{-6}$ | $1.36 \\times 10^{-2}$ | $1.36 \\times 10^{-2}$ | $2.92 \\times 10^{-3}$ | \n**Table 1**: HSIC metrics with $k$ taken as RBF\n\nIn order to correctly assess the pairwise interactions of input variables $x_1$ and $x_2$, one has to remove the individual effect of each variable from $HSIC( x_{1,2} , \\textnormal{y})$. The orthogonal decomposition property [1] allows us to do so by simply computing $HSIC_{inter}( x_{1,2} , \\textnormal{y})$ as\n\n$HSIC_{inter}( x_{1,2} , \\textnormal{y}) = HSIC( x_{1,2} , \\textnormal{y}) - HSIC(x_1, \\textnormal{y}) - HSIC(x_2, \\textnormal{y})$\n\n**If the decomposition property does not hold, we are not guaranteed to fully remove the individual effects of $x_1$ and $x_2$ using the previous formula**. We estimate $HSIC_{inter}( x_{1,2} , \\textnormal{y})$ when the kernel $k$ satisfies the decomposition property (in that case, we choose a Sobolev kernel as in [1]), and when it does not, and show that the correct interaction information is only retrieved when the decomposition property is satisfied. As before, this is illustrated by the experiment whose results are found in Table 2. \n\n| | $HSIC_{inter}( x_{1,2} , \\textnormal{y})$ | $HSIC_{inter}( x_{1,3} , \\textnormal{y})$ | $HSIC_{inter}( x_{2,3} , \\textnormal{y})$ |\n| ----------- | ------------------------------------------------------ | ------------------------------------------------------ | ------------------------------------------------------ |\n| $k$ Sobolev | $7.68 \\times 10^{-6}$ | $2.83 \\times 10^{-6}$ | $7.85 \\times 10^{-4}$ |\n| $k$ RBF | $-4.35 \\times 10^{-3}$ | $-4.30 \\times 10^{-3}$ | $2.91 \\times 10^{-3}$ |\n**Table 2**: HSIC metrics for assessing interactions, when $k$ satisfies (Sobolev) / does not satisfy (RBF) the orthogonal decomposition property\n\n\nIn that case, with $k$ satisfying the orthogonal decomposition property (Sobolev), we retrieve that $HSIC_{inter}( x_{1,2} , \\textnormal{y}) \\approx HSIC_{inter}( x_{1,3} , \\textnormal{y}) \\approx 0$ and $HSIC_{inter}( x_{2,3} , \\textnormal{y})$ is significant. 
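A minimal, hypothetical sketch of the toy experiment above, using the biased HSIC estimator with RBF kernels only (the Sobolev kernel required for the exact orthogonal decomposition is not reproduced here); the exact numbers will differ from Tables 1 and 2, but the qualitative pattern (HSIC(x2, y) near zero while HSIC((x2, x3), y) is clearly non-zero) should be visible.

```python
import numpy as np

def rbf_gram(z):
    """RBF Gram matrix with a median-heuristic bandwidth."""
    z = np.atleast_2d(np.asarray(z, dtype=float))
    if z.shape[0] == 1:
        z = z.T
    sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    sigma2 = np.median(sq[sq > 0]) / 2.0 + 1e-12
    return np.exp(-sq / (2.0 * sigma2))

def hsic(x, y):
    """Biased estimator tr(K H L H) / (p - 1)^2."""
    p = len(x)
    H = np.eye(p) - np.ones((p, p)) / p
    return float(np.trace(rbf_gram(x) @ H @ rbf_gram(y) @ H)) / (p - 1) ** 2

rng = np.random.default_rng(0)
p = 2000                                    # fewer samples than the 10000 above, for speed
x = rng.uniform(0.0, 2.0, size=(p, 3))
y = (((x[:, 0] <= 1) & (x[:, 1] > 1) & (x[:, 2] <= 1)) |
     ((x[:, 0] <= 1) & (x[:, 1] <= 1) & (x[:, 2] > 1))).astype(float)

print("HSIC(x1, y)       =", hsic(x[:, 0], y))       # individually informative
print("HSIC(x2, y)       =", hsic(x[:, 1], y))       # close to zero: x2 alone looks irrelevant
print("HSIC((x2, x3), y) =", hsic(x[:, [1, 2]], y))  # the pair is clearly dependent on y
```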
When $k$ does not satisfy the property (RBF), the values are not relevant (a negative value has no meaning since the metric is a distance)\n\n[1] Kernel-based anova decomposition and Shapley effects–application to global sensitivity analysis, Da Veiga, 2021\n", " Summary\nThe paper is motivated to improve the explainability of vision inference results. Specifically, they propose to measure the dependence between certain perturbation of regions of the input images and the actual labels in RKHSs. This helps to understand how much independent $M_i$ is from $y$, so that the approach may determine how important each perturbed fraction of the image are associated with the final inference results. The approach then is able to locate the pivotal input parts that most possible has led to the final inference. Empirical results validate that, in certain scenarios, the proposed method provides better explanation in comparison to current SOTA methods. \n * Strength:\n\nThe method proposes to introduce the HSIC metric in order to detect the dependence between certain fraction of the input and the actual label. The approach correspondingly gives analysis in why measuring such dependence help better explain the inference result through joint interactions among different patches, a perspective that other SOTA methods haven’t examined. Empirical results show that the proposed method can better locate the essential parts of the image. \n\n* Weakness:\n\nI am not quite sure how \"Proposition 1\", i.e., the decomposition property is related to the actual explainability of the proposed method. Is it a condition to enable the incorporation of HSIC metric in this specific scenario? It seems to me there is some gap here between the analysis and the actual motivation to improve explanation. In other words, why “it does not ensure that we assess the importance of the interactions only (line 199-200)” if the decomposition property does not hold? And how this links to the eventual inference?\n\nThe description of the algorithm could be further clarified. The current algorithm flow is a bit vague in how we can obtain the eventual explanation and to diagnose the model. Take for example, how do we link to the eventual pixel level highlights of the image parts (explanation like image Figure 3) through the construction of Eq.(2)?\n\n\n* Minor issues: \n1. There are some presentation issues in technical details:\nEq. (2), the brackets of the trace operation are missing. \n\n2. There are some minor grammar issues that could be further improved. Here are some examples. \n Line 141-142 This sentence has grammar issues and needs to be rephrased; \n Line 177 Let ${x_1…X_n} $ a set of -> Let ${x_1…X_n}$ be a set of;\n Line 190 Let a Bernoulli Variable -> Let x be a Bernoulli variable;
 1. How the decomposition property is related to the actual explainability of the proposed method (See details above in the weakness).\n2. How the algorithm flow is connected to the eventual implementation of pixelwise explanation of the input image (See comments above regarding the algorithm flow)? Yes, the authors have discussed the limitations. ", " The authors present a black-box attribution method based on Hilbert-Schmidt Independence Criterion. By leveraging representation capabilities of RKHS, this method provides a dependence measure to capture more diverse information than traditional variance-based indices and does so more efficiently. This method randomly perturbs an input image in multiple regions at a time (similar to LIME, RISE, and Sobol) to identify higher-order interactions between patches. They demonstrate it on various applications including image classification and object detection tasks. The authors quantify the efficacy of explanations using Deletion, Insertion, and uFidelity. The results show that their method is largely better than other attribution methods across a variety of neural networks, old and new. Strengths\n- good job of placing research in broader field\n- the theory is well founded and the implementation algorithm is clear\n- the experiments and results are clear\n- thorough exploration across popular attribution methods and across various architectures with better performance and faster execution times\n\nWeakness\n- The spatial interactions in the model section is anecdotal. - The difference in explanations between Hi and Hixj is surprising that they are different. Shouldn't attributions from each image patch overlap with the attributions from interactions? Second-order effects are usually smaller. \n- Is there an XOR situation that could highlight that Hixj is able to uncover unique interactions that would be missed by attributions of individual patches? \n- Is there a way to quantify expected interactions? The challenge is that quantitative comparisons is difficult without ground truth. The Deletion, Insertion and uFidelity metrics provide a quantitative way to compare methods but they are all flawed. Moreover, these explanations provided by these methods are quite limited as regions are highlighted. It is not clear what about those regions, i.e. edges, textures, shapes, are important. A section on limitations of attribution maps should be provided as the language term explanation used in this paper is a bit loose. Notwithstanding, it has the makings of a valuable contribution for attribution-based interpretations with clear benefits compared to existing methods. ", " The paper presents a new method for obtaining saliency maps that can explain a deep image classifier's predictions. The method is \"black box\" in the sense that it only relies on the ability to evaluate a model's predictions (obtain the predicted label \"y\" for any desired input \"x\"). It does not require any knowledge or access of the internals of the prediction function (e.g. the ability to compute gradients) as \"white box\" methods would.\n\nThe proposed method uses the Hilbert-Schmidt Independence Criterion (HSIC).\nIt assumes the selection of two kernel functions: \"k\" defines the similarity between two images (x and x') and \"l\" defines the similarity between two labels (y and y'). These kernels have associated feature embeddings. 
\n\nGiven p possible masks that perturb an original image x, and the associated outputs, the HSIC concretely computes (see Eq 2) the norm of the cross-covariance between the embeddings of perturbed images and embeddings of their predictions. The scalar quantity HSIC can be computed for each of the patches in an image.\n\nThey further describe how *interactions* between two patches in the same image can be given an HSIC score (Eq 5), which allows higher-order reasoning about importance (e.g. in an image of a face, perhaps each eye individually is not so important but the combination of the two eyes is). \n\nOverall, the claimed contributions of the proposed approach are:\n- 1) a new method based on HSIC\n- 2) a derivation that shows an orthogonal decomposition property exists for a specific kernel\n- 3) experiments showing the method is useful on ImageNet in terms of quality and speed\n- 4) demonstration of versatility on object detection and in pairwise patch interactions\n \nStrengths\n---------\n+ Speed ups of 5-8x over previous black-box attribution methods (Sobol and RISE) seem useful\n+ The RKHS foundations of the proposed methods are technically interesting\n+ Focus on pairwise interactions between patches is useful\n+ Experiments compare to a variety of \"state of the art\" white and black box methods\n\nWeaknesses\n----------\n- W1: Technical communication could be better, some key details are difficult to parse [see below]\n- W2: Orthogonal decomposition property in particular is not well explained or justified\n- W3: Qualitative visual examples are only shown for the presented method; difficult to assess relative value over baselines\n- W4: Some of the visual explanations are puzzling: Why is a Puma's mouth more important than eyes? Why is only the number 11 important on the clock?\n\n\n### W1: Technical communication issues\n\nIn Sec 3.1, there are several issues:\n\n1) the definition of \"n\" versus \"d\" is not clearly established.\nIt is not clear how the elementwise product between x (a vector of size n) and M (a vector of size d) is well-defined. \nIt is not clear if \"patches\" or individual pixels are used.\n\n2) the way that \"baseline\" value \\mu is defined is not clear\nIs this value the same for every pixel in image? How is it defined? is it learned?\n\nIn Sec 3.2, further issues:\n\n3) It is difficult to do a dimensionality analysis of Eq 2 \nLooking at Eq 2, the trace of a product of pxp matrices should be a scalar.\nBut surely the HSIC is computed for each pixel/patch in the image, right?\nThis equation's presentation should be modified to more clearly indicate how you get a scalar for each pixel/patch of the saliency \"heatmap\" \n\n\n4) Using symbol \"x\" here as a general random variable collides with readers' previous understanding of x as the input image. \nI think you want to distinguish the two.... there isn't a distribution over the input image, instead there's a distribution over *perturbations* of this image.\nFor example, in Eq 4 you need x to be discrete, but this collides with reader's already formed assumption that x is a pixel.\n\n### W2: Orthogonal decomposition is not clear\n\nI understand at a high level that assessing the interaction between two patches in the image is important and it is not additive. But the technical details of the proposed \"orthogonal decomposition\" were not clear to me.\n\nLines 176-182 were particularly hard to read for me.\nWhy is the \"sum of indices\" equal to one?\nWhy does this imply something beneficial? 
\n### Q1: What was done to ensure fair comparison of timings in Table 1?\n\nHow many perturbations p are Sobol or RISE using? How was this selected?\nShouldn't an \"efficient\" and \"accurate\" version of both methods be also possible?\n\n\n### Q2: In Table 2, why use 5000 samples for D-RISE but fewer for your method?\n\nAgain, seems like this choice will impact the speed of baselines.\nHow can we be sure this choice is fair?\n\n### Q3: What does it mean for a method to score better on Insertion than Deletion?\n\nRISE is notably better on that Insertion metric than others. \nWhat's the intuition here? Are there ways to visually understand why that is?\n\n\nMinor Presentation Issues\n-------------------------\nPlease adjust for revision, but no need to discuss in rebuttal \n\n- Lines 25-28 read awkwardly, consider revising for flow\n- Fig 1 gives impression that x has a distribution, but the distribution is over binary mask, right?\n- Lines 46-49: A distribution over what exactly? Over perturbed values? Over pixels observed in a region/patch?\n- Lines 123-126: Why use the symbol x twice (italicized and not)? Please redefine\n- Line 171: \"networkss\" typo Limitations\n-----------\nThere are likely a number of choices that could be very sensitive to overall performance that are under-explored here:\n- number of patches used / size of the 'grid'\n- choice of the kernel functions (both \"k\" and \"l\")\n- choice of the lengthscale of the RBF kernel (maybe median is OK, but could you improve performance by optimizing?)\n\nAs a blackbox method, there may be cases where white box methods are possible and preferred, but this is adequately addressed by the paper.\n\nThe discussion of negative societal impact was a bit light, though unobjectionable (I agree that for a new technical method like this, it will share the same possible for misuse as related methods like Sobol or RISE). ", " After rebuttal: The authors answered my questions. The ideas are interesting, but the improvement of the state-of-the-art is perhaps not that significant and the presentation could be improved (which the authors can probably do if accepted). I increased my rating to weak accept.\n\n---\n\n\nThe authors propose a new attribution method based on the Hilbert-Schmidt Independence Criterion. The paper explains the technical background, defines and motivates the attribution method and proposes a simple sampling algorithm to approximate it. The authors also discuss how the new method can be better suited for capturing the joint effects of features. In the experiments, the paper evaluates 'fidelity', 'efficiency' and finding spatial interactions. The idea seems novel and is relevant for NeurIPS. The main motivation for the particular choice of the attribution method seems to be that it features a decomposition property that allows separting the joint effect of features from their individual effects. While this is theoretically interesting, it could be discussed and evaluated in more detail if it gives much better explanations than simply considering pairs of features that could then be evaluated by a standard method. To get an 'efficient method' the authors use a sampling algorithm. Previous results from the literature seem to suggest that the estimator applied here needs fewer samples. However, the discussion is somewhat disconnected from the actual algorithm and it would be good to be more explicit about which guarantees (please refer to the theorem number) from which paper can be applied here and why. 
\n\nThe 'fidelity' evaluation is based on the measures Deletion, Insertion and muFidelity. They are explained in the appendix, which should be mentioned in the main part (I almost missed it). I suppose that Insertion is based on AUC symmetrical to Deletion, but it would be good to make this explicit). Table 1 shows Deletion and Insertion results, but muFidelity seems to be missing. The proposed approach works best with respect to Deletion, but not with respect to Insertion, which is dominated by RISE. This indicates that there could be a tradeoff between Insetion and Deletion similar to Precision and Recall. Perhaps it's just a coincidence, but it would be good to discuss why the proposed method does better with respect to one measure, but not with respect to the other. I am also missing muFidelity in the table. The runtime experiment is probably supposed to illustrate the convergence speed. While the graph intuitively indicates faster convergence, it should be discussed in more detail what it actually shows and why the experiment has been set up in this way. In particular, what is the \"baseline asymptotical explanation\"? Interactions are investigated by means of examples. The picked examples are intuitive. While such an evaluation by example seems always a bit weak, it does make sense that the joint effect of features can be more important than their individual effects.\n\nOverall, the paper does not contain ground-breaking results, but shows improvements in some areas (Deletion, Runtime). There are some interesting ideas in the paper, but they should be discussed and evaluated in more detail in my opinion (e.g. feature interaction, runtime and approximation error guarantees). The code is available for reproducibility, although I did not have time to check it. At the moment, I consider this as a borderline paper.\n\nSome comments\n\n- I find the black-box/white-box distinction in this paper confusing. White-box sometimes refers to a model that is 'interpretable', while black-box refers to one that is not. Here, white-box just means that the model can be accessed, while black-box means that it cannot. Why not simply call it available/unavailable to avoid confusion? I am also not convinced that availability is really an issue in many cases. Of course, an end-user does not have access to the model, but in many applications, the explanation module should probably be part of the user interface. I may be wrong, and some concrete examples could convince me otherwise. \n\n- Parts of the paper are very vague and convoluted. For example, lines 37-41 would be perhaps be easier to understand when just writing that perturbation methods rely on sampling and evaluating a sample can be expensive for recent neural network architectures that are growing larger.\n\n- line 31: the paper refers to [18] for a statement about gradients, but the paper does not really seem to discuss the issue. Please clarify or delete.\n The main points from my review above that could change my opinion are the following (please correct me if I am wrong):\n\n1) The main motivation for the particular choice of the attribution method seems to be that it features a decomposition property that allows separting the joint effect of features from their individual effects. While this is theoretically interesting, it could be discussed and evaluated in more detail if it gives much better explanations than simply considering pairs of features that could then be evaluated by a standard method. 
\n\n2) Previous results from the literature seem to suggest that the estimator applied here needs fewer samples. However, the discussion is somewhat disconnected from the actual algorithm and it would be good to be more explicit about which guarantees (please refer to the theorem number) from which paper can be applied here and why. \n\n3a) Table 1 shows Deletion and Insertion results, but muFidelity seems to be missing. The proposed approach works best with respect to Deletion, but not with respect to Insertion, which is dominated by RISE. This indicates that there could be a tradeoff between Insertion and Deletion similar to Precision and Recall. Perhaps it's just a coincidence, but it would be good to discuss why the proposed method does better with respect to one measure, but not with respect to the other. \n\n3b) I am also missing muFidelity in the table.\n\n4) The runtime experiment is probably supposed to illustrate the convergence speed. While the graph intuitively indicates faster convergence, it should be discussed in more detail what it actually shows and why the experiment has been set up in this way. In particular, what is the \"baseline asymptotical explanation\"?\n Yes" ]
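Since the Deletion and Insertion fidelity metrics come up repeatedly in the reviews and responses above, here is a schematic, hypothetical sketch of how a Deletion-style score is typically computed: progressively remove the highest-attribution patches and track the drop in the class score. The baseline value, step size, and AUC normalization are implementation choices that vary between papers, and `model` and `attribution` are placeholder inputs for this illustration, not APIs from this paper's code.

```python
import numpy as np

def deletion_curve(image, attribution, model, class_idx, patch_size=16, baseline=0.0):
    """Schematic Deletion metric: blank patches from most to least important and record
    the model's class score after each removal; a lower area under the curve is better.
    Assumes a single-channel 2-D image and a (grid_h, grid_w) attribution map."""
    grid_h, grid_w = attribution.shape
    order = np.argsort(attribution, axis=None)[::-1]        # most important patches first
    scores = [model(image)[class_idx]]                      # score on the unperturbed image
    perturbed = image.copy()
    for flat_idx in order:
        r, c = divmod(flat_idx, grid_w)
        perturbed[r * patch_size:(r + 1) * patch_size,
                  c * patch_size:(c + 1) * patch_size] = baseline
        scores.append(model(perturbed)[class_idx])
    scores = np.asarray(scores)
    return np.trapz(scores, dx=1.0 / (len(scores) - 1))     # normalized AUC of the curve
```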
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 3, 3 ]
[ "I9utDw2FS9", "J4uRUY0SjSF", "FmT8peNmHsm", "e4fUNmDOHgq", "z-bNZzewpa", "MxOtw2o5BfL", "z-bNZzewpa", "jg052CJhnFs", "J4uRUY0SjSF", "n47yNBtYXI", "nips_2022_Vt3_mJNrjt", "nips_2022_Vt3_mJNrjt", "nips_2022_Vt3_mJNrjt", "nips_2022_Vt3_mJNrjt", "nips_2022_Vt3_mJNrjt", "nips_2022_Vt3_mJNrjt" ]
nips_2022_-ZQOx6yaVa-
Causally motivated multi-shortcut identification and removal
For predictive models to provide reliable guidance in decision making processes, they are often required to be accurate and robust to distribution shifts. Shortcut learning--where a model relies on spurious correlations or shortcuts to predict the target label--undermines the robustness property, leading to models with poor out-of-distribution accuracy despite good in-distribution performance. Existing work on shortcut learning either assumes that the set of possible shortcuts is known a priori or is discoverable using interpretability methods such as saliency maps, which might not always be true. Instead, we propose a two step approach to (1) efficiently identify relevant shortcuts, and (2) leverage the identified shortcuts to build models that are robust to distribution shifts. Our approach relies on having access to a (possibly) high dimensional set of auxiliary labels at training time, some of which correspond to possible shortcuts. We show both theoretically and empirically that our approach is able to identify a sufficient set of shortcuts leading to more efficient predictors in finite samples.
Accept
The reviewers agreed the paper is a worthwhile contribution in a growing area of identifying and removing shortcuts for robustness to distributional shift. Please take the reviewers feedback into consideration for the camera-ready.
train
[ "RhQH0NHpf6", "vlHRHZOMPLX", "q2CsJhtSGx", "rh7ErxHPYmq", "-u-Has5CCNq", "Z7M2oK4SqEg", "ydgW2jvEn_Ey", "BJ2_RP29og-", "K8P7g_BNjTM", "zXsui6UaDBH" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comment. \nGroup DRO is equivalent to our method when the loss is convex, but this equivalent might break down when they are non-convex. \nThis is explored in detail in the group DRO paper [1] on page 8 under theoretical comparison. \n\n[1] Sagawa, \"Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization\" ICLR 2020, https://arxiv.org/pdf/1911.08731.pdf\n\n", " Thank you for your response and clarifications! I am satisfied with the response and maintain my score.\n\nCould you please clarify the connection between the baselines you implemented and Group DRO? I think it would be useful to include the camera ready version of the paper as well. Generally, I would recommend including a more detailed description of the baselines in the appendix, as these are not standard methods that the readers will be familiar with.", " I have updated my score.", " We thank the reviewer for their time and thoughtful feedback.\n\n**1. Theoretical or empirical estimate of the impact of having errors in $\\hat{V}^p$:**\n\n(a) _Empirical results on sensitivity to identifying $V^p$_: two of our baselines highlight what happens if $\\hat{V}^p$ is a poor estimate of $V^p$: W-HSIC-FullV shows what happens if $\\hat{V}^p$ includes redundant auxiliary labels, while W-HSIC-HDX shows what happens if $\\hat{V}^p$ excludes important shortcuts. While these baselines are outperformed by our main approach, they still outperform all others despite relying on a partially flawed estimate of $V^p$.\n\n(b) _Theoretical results_: As we state on lines 216-217, our approach for estimating $V^p$ is asymptotically consistent, and hence should converge to the true $V^p$. In addition, results from [6] can be applied to our setting to show that the generalization error bound of our proposed estimator has a fourth order (i.e., mild) dependence on the error in estimating $\\hat{V}^p$. We will add this statement to the main paper.\n\n**2. Indirect empirical evaluation:**\nWe stress that our main goal is estimating a robust predictor. We view identifying the shortcuts as a means to an end, rather than a goal in and of itself. For this reason, we believe that reporting the AUROCs across distribution shifts is our main result. However, we also report our approach’s ability to identify the correct sufficient shortcuts in lines 340-347. Specifically, we report that our approach identifies the correct sufficient shortcuts in all 10 simulations, while an approach that skips our s-reduction step, identifies the correct shortcut in 1/10 of the simulations.\n\n**Both experiments are synthetic** : First, we note that our experiment setup falls in line with existing literature on distribution shifts. Our waterbirds experiment is done on a commonly accepted and widely used robustness benchmark dataset (e.g., see [1-5] among others). Our second experiment on the diabetic retinopathy data reflects a slightly more realistic medical example. Second, we wish to highlight the difficulty with using non-semi-synthetic data. Evaluating the robustness of our approach requires testing it on multiple test sets with varying $P(V^p, Y)$. In real datasets, this can be achieved by undersampling subpopulations in each of the test sets, which leads to varying sample sizes between different test sets. 
If we observe that a model’s accuracy has high variance on one test set compared to another, it could be due to poor model performance, or it could be due to one test set being smaller than the other, making it hard to draw meaningful conclusions about the model’s distributional robustness.\n\n**Target distribution is consistent with DAG:**\nThe reviewer is correct in that we require the observed data to satisfy our assumptions about the DAG in section 3. This means that some domain knowledge is needed to ensure that the data generating process conforms to our assumptions. However, we believe that such knowledge is readily available for a large number of research questions, e.g., in prediction problems where the task is classifying objects in an image or detecting the presence of a disease from medical imaging. Future work investigating sensitivity to violations in our assumptions is an exciting new direction for research.\n\n**Additional details regarding our statement \"our approach is able to correctly identify the two true auxiliary labels which mark the true shortcuts in all 10 simulations\"**:\nAs we explain in lines 305-306, we simulated the data such that the only 2 true shortcuts are image background and camera artifacts. In addition to those 2 shortcuts, we simulate 10 redundant auxiliary labels that do not mark true shortcuts since they do not affect X. Our approach is able to identify the image background and camera artifacts as the two relevant shortcuts in all 10 simulations.\n\n[1] Sagawa et al. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. ICLR 2020\n\n[2] Liu et al. Just train twice: Improving group robustness without training group information. International Conference on Machine Learning. ICML, 2021\n\n[3] Creager, E., et al. Environment inference for invariant learning. ICML 2021\n\n[4] Sohoni, N., et al No subclass left behind: Fine-grained robustness in coarse-grained classification problems. Neurips 2020.\n\n[5] Makar et al. Causally motivated shortcut removal using auxiliary labels. AIStats 2022.\n\n[6] Foster, Dylan J., and Vasilis Syrgkanis. \"Orthogonal statistical learning.\" 2019\n", " Thank you for your thoughtful review!\n\n**Weakness 1, Complexity of the method**: We will add pseudo code for our algorithm in the appendix, and we will make our code publicly available to help with implementation.\n\n**Weakness 2, Group DRO and Just train twice (JTT) as a baseline:** As we explain in lines 338-339, Group DRO is very closely related to (and in some cases equivalent to) baselines that we have implemented (W-L2-FullV, W-L2-S and W-L2-HDX). All these baselines are outperformed by our approach.\nJTT does not leverage the auxiliary labels. In settings like ours where they are available, not leveraging them is a missed opportunity. The JTT paper itself confirms that, showing that models with access to $V^d$ (specifically, DRO) outperform JTT. Since our approach outperforms methods roughly equivalent to DRO, we have good reason to believe that our approach will outperform JTT.\n\n**Weakness 3, Semi-synthetic experiments:** First, we note that our experiment setup falls in line with existing literature on distribution shifts. Our waterbirds experiment is done on a commonly accepted and widely used robustness benchmark dataset (see [1-5] among others). Our second experiment on the diabetic retinopathy data reflects a slightly more realistic medical example. 
Second, we wish to highlight the difficulty with using non-semi-synthetic data. Evaluating the robustness of our approach requires testing it on multiple test sets with varying $P(V^p, Y)$. In real datasets, this can be achieved by undersampling subpopulations in each of the test sets, leading to varying sample sizes between different test sets. If we observe that a model’s accuracy has high variance on one test set compared to another, it could be due to poor model performance, or it could be due to one test set being smaller than the other, making it hard to draw meaningful conclusions about the model’s distributional robustness.\n\n**Question 1 and 3b, $P_s(V^p)P_s(Y) << P_s(V^p, Y)$ and overlap**: This notation means absolutely continuous with respect to, we will clarify that. As we state on lines 103-104, it reflects the overlap assumption, which is common in the causal literature. Intuitively, it can be understood as an assumption that $P_s(Y=y, V^p=v^p)$ is non-zero for all $y, v^p$. We will explicitly state that.\n\n**Question 2, $Y$ or $X$ independent to $V^p$**: We thank the reviewer for catching this typo. Line 140 should read $Y$ (not $X$) independent to $V^p$, which is consistent with proposition 1.\n\n**Question 3a**: We thank the reviewer for catching the typo in definition 1, $f$ should be $g$.\n\n**Question 4, similarity to iCARL**: We thank the reviewer for pointing out this related work, we will refer to it in the related works section. The work in iCARL is fundamentally different from ours in that they assume access to multiple datasets collected from different environments. We assume access to a single dataset collected from one environment. In addition, since one of their main goals is identifying causal factors controlling $Y$, they impose assumptions on the functional form of some components of the probability distribution (that they belong to an exponential family). We do not impose any such assumptions, and allow the true distribution to take any form.\n\n**Question 5:** _“Once the shortcuts are identified, could you apply standard methods in the spurious correlation literature…[like] group DRO?_\n\nIn principle, yes. However, the algorithm that the reviewer suggests (identifying the shortcuts then deploying groupDRO) is equivalent to our baseline W-L2-S, and is outperformed by our suggested approach. This is because our suggested approach has an additional HSIC penalty, which reduces the variance of the estimator without introducing bias (as we show in the appendix, proposition A1).\n\n**Question 6:** The strength of the L2 regularization is picked via cross-validation as outlined in the appendix, section C lines 545-546. The fact that our approach does better in distribution is consistent with findings from Makar et al 2022, who show that the MMD penalty leads to improved efficiency compared to the typical L2 penalty even in-distribution. The same conclusion holds for our HSIC penalty, as we highlight in the appendix, proposition A1.\n\nWe will also correct the other typos that the reviewer identified.\n\n[1] Sagawa et al. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. ICLR 2020\n\n[2] Liu et al. Just train twice: Improving group robustness without training group information. International Conference on Machine Learning. ICML, 2021.\n\n[3] Creager, E., et al. Environment inference for invariant learning. 
ICML 2021\n\n[4] Sohoni, N., et al No subclass left behind: Fine-grained robustness in coarse-grained classification problems. Neurips 2020.\n\n[5] Makar et al. Causally motivated shortcut removal using auxiliary labels. AIStats 2022.\n", " We thank the reviewer for their constructive feedback. \n\n**1- Domain knowledge:** _“My issue...is the ability to collect auxiliary labels that satisfy the conditions on $V^c$”, “a lot of care could still be required in specifying $V^c$”,_ and _“[other papers] do not require exact knowledge of the auxiliary labels”._\n\nWe stress that we do not require exact knowledge of the shortcut labels. We only require access to a set of auxiliary labels ($V^d$) that satisfy the assumptions in section 3, but do not require the user to distinguish between meaningful shortcuts ($V^p$), and redundant features ($V^c$). Proposition 2 establishes that we can distinguish the two subsets without domain knowledge, by conducting 2 sets of independence tests.\n\n**2- Violations to the assumption about auxiliary labels** : _“One can end up collecting collecting auxiliary labels that are children of $X^*$”_, and _“selecting auxiliary labels seems to be as hard as finding spurious features using expert knowledge”_\n\nLike any other ML algorithm, our approach might have suboptimal performance under violations of our assumptions. However, we believe that the domain knowledge required to ensure our assumptions are not violated is readily available for a large number of important research questions. E.g., in prediction problems where the task is classifying objects in an image or detecting the presence of a disease from medical imaging. In this case, almost any auxiliary data that is associated with the image cannot be a causal descendant of the image itself: for example X-ray pixels ($X$) are unlikely to be the causal parent of the auxiliary variables $V^d$ since X-ray pixels cannot cause a person to have pneumonia, be a woman or be a certain age. The case where $X^*$ is a causal parent is even less likely. Intuitively, $X^*$ is a sufficient statistic (i.e., a summary) of the causal effect of $Y$ on the appearance of the image $X$, for example opacity in the appearance of the lungs in case of detecting pneumonia. Changes in the appearance of an image caused by the target label are unlikely to be the causal parent of an auxiliary label. Overall, we agree with the reviewer that our work represents important first steps, and that future work should focus on developing sensitivity analysis for settings where our assumptions are violated. We will state that in the paper.\n\n**3- Conditional independence testing vs. KCIT**: _“Conditional independence tests are supposed to be hard when things get high dimensional”_.\n\nThe reviewer is correct. This is exactly why we propose conducting the independence tests on $s(X)$ which is a lower dimensional projection of $X$. We chose KCIT for ease of implementation since it is relatively well known among the ML community, and because of its strong theoretical guarantees (asymptotically consistent). Other independence tests might be suitable, and would be interesting to explore in future work, we will state that.\n\n**4 - If $V^d$ is high dimensional, will $s(x)$ be high dimensional?**: Not necessarily. As outlined in line 191-197, we first remove the subset $V^d$ that satisfies the condition $Y \\perp_{P_s} V^i \\mid V^{i\\/d}$. For the remaining variable set, which we denote by $\\underline{d}$, we train $s(x) \\rightarrow V^\\underline{d}$. 
While $V^\\underline{d}$ likely has a smaller dimension than $V^d$, it might still be high dimensional, making the output from $s(x)$ high dimensional. However, we note that there is a vast literature on high dimensional classification which shows that the task of learning $s(x)$ has a mild dependency (up to logarithmic) on the size of $\\underline{d}$ [1], which means that $s(x)$ can be learned efficiently from finite (small) samples. We did not find practical challenges with this step.\nNote: we will fix the typo in the definition of s(x) on line 196, it should read $s(x) : X → V^\\underline{d}$ not $s(x) : X → V^d$.\n\n**5- Related work**: We thank the reviewer for pointing out additional related work, we will include these papers in the related works section.\n\n[1] Lei, Yunwen, et al. \"Generalization error bounds for extreme multi-class classification.\" CoRR, abs/1706.09814 (2017).\n\n", " We thank the reviewers for their thoughtful feedback. We are encouraged that they recognize the importance of our contribution (Reviewers 4zx1 and 3Y9R), the clarity of our presentation (Reviewers 4zx1 and 3Y9R) and the strength of our experimental findings (Reviewers 4zx1 and 3Y9R).\n\nTwo of the reviewers asked about the performance of our approach in real data settings.\nFirst, we note that our experiment setup falls in line with existing literature on distribution shifts. Our waterbirds experiment is done on a commonly accepted and widely used robustness benchmark dataset (e.g., see [1-5] among others). Our second experiment on the diabetic retinopathy data reflects a slightly more realistic medical example. Second, we wish to highlight the difficulty with using non-semi-synthetic data. Evaluating the robustness of our approach requires testing it on multiple test sets with varying $P(V^p, Y)$. In real datasets, this can be achieved by undersampling subpopulations in each of the test sets, which leads to varying sample sizes between different test sets. If we observe that a model’s accuracy has high variance on one test set compared to another, it could be due to poor model performance, or it could be due to one test set being smaller than the other, making it hard to draw meaningful conclusions about the model’s distributional robustness.\n\nIn addition, two of the reviewers asked about how our method relates to other work. We thank the reviewers for pointing us to relevant literature. We recognize that with such a fast growing topic, it is difficult to include a comprehensive literature review. However, we believe including the suggested citations will make our paper stronger.\n\nWe respond to additional questions by each reviewer separately below.\n\n\n[1] Sagawa et al. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. ICLR 2020\n\n[2] Liu et al. Just train twice: Improving group robustness without training group information. International Conference on Machine Learning. ICML, 2021\n\n[3] Creager, E., et al. Environment inference for invariant learning. ICML 2021\n\n[4] Sohoni, N., et al No subclass left behind: Fine-grained robustness in coarse-grained classification problems. Neurips 2020\n\n[5] Makar et al. Causally motivated shortcut removal using auxiliary labels. AIStats 2022\n", " The paper uses auxiliary labels to detect and mitigate shortcuts. The key idea is that they are able to go from a large set of auxiliary labels to a potentially small set that are actually shortcut labels. 
The most of the representations building follows from Makar et al. 2022.\n\n\n\n\n The paper is easy to read and makes an important contribution to an important problem. They take clean and useful steps to go from having auxiliary labels that are not necessarily shortcut labels to get to a shortcut labels by using conditional independence tests. This seems to suggest that we can pick up a bunch of labels about the task and use them as V^d. The experiments in the paper are encouraging. Lines 105-113 talking about a specific important problem were useful and informative!\n\n\nMy main issue with this paper is the ability to collect auxiliary labels that satisfy the conditions of $V_c$ in figure 1. My issue is that one can end up collecting auxiliary variables that are causal children of $X^*$. This would mean that s(X) is dependent on these always. This means these potentially invariant and informative features (because they appear in $X^*$) would end up being listed in $\\hat{V}_p$. Then, under means going to $P_0$ will make Y independent on some invariantly relevant features. This would lead to suboptimal error. \n\nThis means that a lot of care could still be required in specifying $V_c$, which would require domain knowledge or some knowledge of shortcuts. That said, I do believe the paper points out a useful next step to do when one is able to get a list $V_d$ that does not specify any $X^*$ related features.\n\nI have other issues with the claim of the ability to handle high-dimensional $V_d$. See questions.\n\n\nThere's a few papers that have related causal/anticausal factorizations and learning that could maybe go in the related work because they also handle high-dimensional shortcuts or do not require exact knowledge of the shortcut features: causal sufficiency based representation learning https://arxiv.org/abs/2109.03795, equation 1 seems to like the family here and high-dimensional spurious features https://arxiv.org/abs/2107.00520, the causal diagram is used along with environments but not spurious features https://arxiv.org/abs/2006.07500, use augmentations to build robust models https://arxiv.org/abs/2111.12525\n\n\n===== Update after rebuttal\n\nThe authors have addressed my concerns. I'm updating my score accordingly. 1. Selecting the auxiliary labels seems to be as hard as finding spurious features using expert knowledge. How can I pick auxiliary variables without knowledge?\n2. While V^d is allowed to be highdimensional, and then try to find a subset by testing each V separately. Conditional independence testing is supposed to be hard when things get high-dimensional (queue the advent of CRTs https://arxiv.org/abs/2104.10618 etc.). Can the authors comment on if and why they believe this may not be an issue? Is KCIT particularly suited here?\n\nrelated issue: \n\nFinding s(X) in equation 2 seems to require building a model with a high-dimensional output (if V^d is allowed to be high-dimensional). Was this easy in practice? How large can V^d get before this step seems a large error?\n\n3. Can the authors address the main concern from above? Giving an example would be greeat here. See questions and weaknesses.", " The paper proposes a method for (1) idetifying shortcuts among a set of candidate shortcuts and (2) training a predictor that is invariant to these shortcuts. The method is based on assumptions about the causal relationships between the target variables $Y$, observed inputs $X$ and the candidate spurious attributes $V^{d}$, and comes with theoretical guarantees. 
The authors perform experiments on two semi-synthetic vision datasets. I want to note that I am not an expert on causal inference, so I did not verify all the details of the mathematical derivations.\n\n**Strength 1.** The paper is well-written and the presentation is clear. In particular, Section 3 is written in a way that it can be understood by non-expert readers, which I appreciate: the authors provide examples associated with different types of causal structures.\n\n**Strength 2.** The proposed method tackles a challenging setting where the list of shortcuts is not known a priori, and has to be identified from a broader range of candidate shortcuts.\n\n**Strength 3.** The experiments show that all the components of the proposed method are actually useful, and the method cannot be trivially simplified.\n\n**Strength 4.** The authors describe the details of the proposed procedure carefully, including how the cross-validation is performed.\n\n**Weakness 1.** The proposed method is quite complicated and involves a lot of moving parts. As far as I understand, you need to (1) train a predictor $s(x)$ that predicts the attributes $V^d$ from $X$; (2) train a predictor $\\eta(v^p_i, y_i)$ in order to obtain example weights to achieve independence between $Y$ and $V^p$; (3) train a feature representation $\\phi(X)$ and classifier $h(x)$; (4) perform multiple independence tests during shortcut identification, feature representation learning and validation. Given the complexity of the procedure, it is likely that it may not work out of the box on a new problem. \n\n**Weakness 2.** Experiments only compare the variations of the proposed method, standard training with weight decay and group weighting. I understand that the paper tackles a unique setting, but it would be good to include other baselines such as Group DRO (given that shortcuts are known) or methods such as Just Train Twice, even if the comparison is not entirely fair. Group DRO could possibly provide an upper bound on the robust classifier performance with identified shortcuts.\n\n**Weakness 3.** Experiments are all done on semi-synthetic problems. Both problems use artificially added black patches as one of the shortcuts. It would be nice to see a more realistic application of the proposed method.\n\n**Typos**:\n- line 7: interprability \n- line 92: are are\n- line 166: two tests that enables\n- line 170: in principal\n- line 281: the that\n **Question 1.** In line 104, could you please clarify what you mean by $P_s(V^6) P_s(Y) \\ll P_s(V^p, Y)$? Does the $\\ll$ here mean \"absolutely continuous with respect to\" or \"much lower\"? In the latter case, one probability measure cannot be much lower than another one everywhere, as they have to both integrate to $1$?\n\n**Question 2.** In proposition 1 you say that you require $Y$ to be independent from $V^p$ but in the interpretation in line 140 you say $X$ independent from $V^p$. Is this a typo?\n\n**Question 3.** In definition 1, what is $f$? Should it be $g'$?\n\n**Question 3.** You mention \"overlap\" several times in the paper, e.g. in line 103 and line 148. What exactly do you mean by overlap? \n\n**Question 4.** Could you please comment on the similarity of your work to iCaRL [1]? \n\n**Question 5.** Once the shortcuts are identified, do I understand correctly that you could apply some of the standard methods in the spurious correlation literature to learn a robust predictor? 
For example, Group DRO?\n\n**Question 6.** How did you select the strength of $L_2$ regularization for the $L2$ method? It appears that it performs poorly even in-distribution on Waterbirds.\n\n[1] *Invariant Causal Representation Learning for Out-of-Distribution Generalization*;\nChaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf No issues.", " This paper aims to address shortcut learning to alleviate the distribution shift problem by leveraging the causal structure of a problem. The paper first defines several models and concepts, including the causal DAGs, generalization risk, P-specific shortcuts, etc. Then, it identifies a sufficient subset of shortcuts with the help of two properties stated in section 4. Afterward, it builds risk invariant predictors on the identified set. Experiments on two datasets demonstrate the effectiveness of their method. \n strength:\n(1) The paper provides the idea that robustness to a large set of distribution shifts can be achieved by ensuring invariance to a small set of shortcuts.\n(2) The paper further develops a method to identify the small set of shortcuts. \n\nweakness:\n(1) The authors propose a two-step method to build risk invariant predictors; however, there is no analysis of how the accuracy of step 1 affects step 2. For example, in section 3, the authors assume that Vp is a subset of Vd. How will the robustness of the predictors be affected when this assumption is violated? Since this method is based on a group of complex assumptions, I believe such results are needed (whether theoretical or experimental).\n\n(2) The experimental results only indirectly demonstrate the contribution claimed by the authors. The authors claim that their method is able to identify shortcuts, yet neither experiment provides an actual result to illustrate this; only overall AUROCs under different settings are offered.\n(3) It seems that both experiments are synthetic, so it is somewhat unconvincing. (1) In equation (1), it seems the target set of distributions is calculated from a specific DAG. Do the arguments and results that follow rely on a correct DAG?\n(2) In section 6.1, it says \"our approach is able to correctly identify the two true auxiliary labels which mark the true shortcuts in all 10 simulations\". Is there any detailed result? Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "vlHRHZOMPLX", "-u-Has5CCNq", "Z7M2oK4SqEg", "zXsui6UaDBH", "K8P7g_BNjTM", "BJ2_RP29og-", "nips_2022_-ZQOx6yaVa-", "nips_2022_-ZQOx6yaVa-", "nips_2022_-ZQOx6yaVa-", "nips_2022_-ZQOx6yaVa-" ]
nips_2022_jtq4KwZ9_n9
Geometry-aware Two-scale PIFu Representation for Human Reconstruction
Although PIFu-based 3D human reconstruction methods are popular, the quality of recovered details is still unsatisfactory. In a sparse (e.g., 3 RGBD sensors) capture setting, the depth noise is typically amplified in the PIFu representation, resulting in flat facial surfaces and geometry-fallible bodies. In this paper, we propose a novel geometry-aware two-scale PIFu for 3D human reconstruction from sparse, noisy inputs. Our key idea is to exploit the complementary properties of depth denoising and 3D reconstruction, for learning a two-scale PIFu representation to reconstruct high-frequency facial details and consistent bodies separately. To this end, we first formulate depth denoising and 3D reconstruction as a multi-task learning problem. The depth denoising process enriches the local geometry information of the reconstruction features, while the reconstruction process enhances depth denoising with global topology information. We then propose to learn the two-scale PIFu representation using two MLPs based on the denoised depth and geometry-aware features. Extensive experiments demonstrate the effectiveness of our approach in reconstructing facial details and bodies of different poses and its superiority over state-of-the-art methods.
Accept
All reviewers were in favor of acceptance. The AC examined the paper, reviews, and author response, and is inclined to agree. The AC would encourage the authors to incorporate their responses to the reviewers into the final version of the paper.
test
[ "q5sG9pVOk17", "tNGM84IVOsi", "lXL0ttKsIYx", "RaiqQns4bv", "9N7OAmGcVLS", "--s78wkZRyh", "PoWLCyeTCYH", "PCRog7lu3-b", "kibxePx4bYu", "kof0enkL-bg", "9DUknTGnLDN", "kq7o7Eop1v", "GlE7Ts8TK_E", "jvKW01tNPXr", "9GRJBmwdyG" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed response. The rebuttal addressed my concerns. I’m leaning towards acceptance.", " Thanks for the interesting questions. Our two-scale PIFu representation has the following two working prerequisites. First, the independently modeled regions (e.g., the face regions) are salient and easy to segment. Second, for these regions, there are high-quality datasets (e.g., FaceScape [89] dataset we used for modeling face region) available for the independent pre-training. \n\n**Q1: For instance, would modeling the hair/scalp separately also prove beneficial?**\n\nYes, modeling the hair/scalp separately is beneficial. We observe that the depth noise of the Kinect camera is large (sometimes the depth value is missing) in the hair/scalp regions. Hair also has fine structures that are error-prone to noise. Hence, our method utilizes depth denoising to handle the noise issue. On the other hand, the hair regions are relatively easy to locate/segment compared to the hand regions; our two-scale PIFu can be further extended to model the high-frequency details of hair/scalp independently (using a high-quality hair dataset for pre-training).\n\nIn addition, some issues remained to consider when we extend our method to model hair (as well as other body parts) separately. First, by modeling more regions/parts separately, one accompanying challenge is the increased computational cost. Second, with more regions being modeled independently, we may need to determine a better occupancy fields fusion strategy to avoid stitching artifacts.\n\n**Q2: Are the improvements due to better utilization of the parameters in areas that are more geometrically complex (like the face / head)?**\n\nYes, better utilization of parameters in geometrically complex areas is important to the final reconstruction performance. Face/head regions not only contain geometrically complex details but are key factors when we assess the fidelity of human reconstruction. By assigning an individual PIFu for face/head, our method can leverage high-resolution facial input images and the pre-training on the FaceScape dataset for high-fidelity reconstruction.\n\n**Q3: Could this approach be further generalized to dynamic capture of non-humanoid shapes?**\n\nWe thank the reviewer for this interesting question. First, although in this paper our two-scale PIFu method reconstructs the human body frame by frame, it is possible to incorporate temporal constraints to handle the dynamic capture of targets. Second, our two-scale PIFu representation is not limited to modeling human bodies. Hence, it is possible for our method to handle the dynamic capture of non-humanoid shapes. \n\nHowever, there will be a few questions to answer when we apply our method to non-humanoid shapes. For example, we will need to determine whether it is necessary to use the two-/multi-scale PIFu representation (considering the computational cost), and which regions are necessary to model separately. For animals, we probably will need to model their faces/heads separately. For non-humanoid robots, we may probably focus on their functioning parts.\n\n", " Thank you for the reply! We will carefully revise the main paper according to the comments.", " Thank you for the detailed rebuttal. There is some interesting discussion here and visuals here about why modeling the hands separately does not improve the quality as of now. This poses the questions about what other body parts may benefit from this separate modeling processing. 
For instance, would modeling the hair/scalp separately also prove beneficial? Are the improvements due to better utilization of the parameters in areas that are more geometrically complex (like the face / head)? If so, could this approach be further generalized to dynamic capture of non-humanoid shapes? Answering these questions would make for interesting follow up work.", " Thanks for the detailed responses and clarification! On the condition that the authors will revise the main paper as promised, I would not be opposed to accepting the paper (raised the score to borderline accept). ", " Hi Reviewers,\n\nThe discussion period is closing soon. Please take a look at the responses from the authors. If you have further questions, please ask them now, since the authors will be unable to respond soon. It's substantially more productive, effective, and reasonable to have a quick back-and-forth with authors now than to raise additional questions or concerns post-discussion period that the authors are unable to address. \n\nThanks,\n\nAC", " Dear all reviewers,\n\nWe sincerely thank you for the previous review time and constructive comments. We have provided additional results and explanations of the issues, including our novelty compared to PIFuHD and JIFF, validation of the multi-task designs, textured results, comparison with face reconstruction methods, etc, which we believe have addressed your main concerns. We will revise our paper accordingly and further improve our supplementary materials. If you have any further questions, please let us know.\n\nThanks,\n\nAuthors\n", " We thank the reviewer for the constructive comments and address the raised concerns below.\n\n**Q1: The method seems to be too complex, and more like stacking existing methods and modules, limiting the novelty of the paper.**\n\nOur novelty lies in two folds. First, unlike previous methods that typically assume noise-free input, we propose to handle human reconstruction from sparse and noisy RGBD images input by formulating the depth denoising and 3D reconstruction as a multi-tasking learning process. Second, unlike previous methods that typically use all-in-one PIFu to represent the whole body, we propose the two-scale PIFu method to represent the face and body separately, allowing much more details to be reconstructed.\n\n**Q2: The writing needs significant improvement ...**\n\nWe thank the reviewer for these helpful suggestions. Will revise our paper accordingly.\n\n**Q3: Since the method deals with face and body separately, it would be good if the comparison with face reconstruction methods is included in the experiments.**\n\nAs suggested, we have compared our method to the state-of-the-art face reconstruction method. Visual comparisons can be found in Fig. E of (https://sites.google.com/view/twoscale), which shows that our method produces competitive face reconstruction results. Compared to these methods that explicitly reconstruct the face model, our PIFu-Face implicitly predicts the face occupancy fields, which facilitates our face-body occupancy fields fusion scheme for full-body reconstruction. \n\n**Q4: The related work could be expanded...**\n\nWe will cite and discuss the suggested works and include more representative video-based human reconstruction methods.\n\n**Q5: Many modules are used in the pipeline. Which of them are completely novel? 
Which are adapted from existing works?**\n\nOur method mainly consists of a geometry-aware PIFu-Body branch, a high-resolution PIFu-face branch, and an occupancy fusion scheme.\n\nThe geometry-aware PIFu-Body branch is built upon the PIFu [70] representation. We extend it to formulate multi-task learning of depth denoising and 3D reconstruction. Besides, we propose the cross attention module (CAM, see Fig. 4 of our paper) to guide the RGB and depth feature fusion. The CAM module is built upon the non-local model [85] (which was originally proposed for handling RGB inputs), and we extend it to model the non-local correlation across RGB-D inputs. We also propose the geometry aware module (GAM, see Fig. 5 of our paper), which to our knowledge is not built upon any existing works, in order to enrich the high-frequency geometric information of the fused RGBD features. \n\nThe high-resolution PIFu-face branch is also built upon the PIFu [70] representation. However, we focus on the estimation of face occupancy conditioned on the high-resolution RGB image and denoised depth information. For our two-scale PIFu, we present a new training method compared to the original PIFu [70] method, as stated in our supplemental (Sec. A).\n\nOur face-to-body occupancy fields fusion, which aims to avoid the stitching artifacts in the implicit occupancy fields, is tailored for our two-scale PIFu method.\n\n**Q6: How are the parameters selected?**\n\nWe have discussed implementation details in the supplemental (Sec. A), including the hyper-parameters.\n\n**Q7: The authors have discussed the limitations. However, it is unclear whether the method works in some challenging scenarios. E.g., loose dressing, eye glasses, etc.**\n\nAs suggested, we show two examples in Fig. B of (https://sites.google.com/view/twoscale). Results show that our method is able to handle the loose dressing and respond to the boundary of eyeglasses wearing to some extent.", " We thank the reviewer for the careful reading and the insightful comments. We are glad to see that the reviewer comments our work has impressive results. We will cite the JIFF paper and revise the paper details. Below we address the raised concerns.\n\n**Q1: While the paper argues that the multi task learning of depth refinement and 3D reconstruction is the key contribution, ... Without such an experiment, the key contribution of this work remains unclear.**\n\nAs suggested, we evaluate our multi-task design by using a sequential framework that first performs the depth denoising and then the human reconstruction using two individual networks. The two networks are trained individually. For depth denoising, we use an ablation version of our full model, by removing the PIFu-body MLP and the occupancy supervision. For human reconstruction, we remove the depth supervision and compute the trunc-TSDF from the input depth maps.\n\nWe report the errors of mesh (P2S and Chamfer distance), normal (L2 and Cosine distance), and the refined depth map (L1 distance) on our test set in Tab. A of link (https://sites.google.com/view/twoscale). We can see that the performance of the suggested sequential method is lower than that of our method (i.e., multi-task manner).\n\nComparing the refined depths and normals of the sequential model with our model (Fig. D in https://sites.google.com/view/twoscale), we find solely depth denoising tends to produce incorrect geometric details (e.g., error depths and normals.), which further results in geometric errors in the reconstruction process. 
This experiment verifies our multi-task design. We will add these quantitative and qualitative results in the final version.\n\n**Q2: This paper misses one highly relevant paper, JIFF [Cao et al. CVPR'22], where PIFu-like model predicts body and face regions separately and fuse them to produce high-quality reconstruction (although they are based on RGB inputs). Given the existence of this work, the contribution of two-scale PIFu would diminish.**\n\nOur two-scale PIFu method differs from the PIFu-like model of JIFF [Cao et al. CVPR'22] in two-folds. \n\nFirst, JIFF still relies on the 3D morphable face model (3DMM) as face prior. Their face reconstruction capacity is constrained by the 3DMM parameters so that their method may not handle dense facial geometry well (we can see from Fig.4 of JIFF paper that their facial regions are still smoothed). Our method does not rely on 3DMM but leverages high-resolution RGB information to enrich local facial geometry information. \n\nSecond, JIFF fuses the face and body MLP features in the feature domain and uses another MLP to produce the whole-body occupancy. This strategy of early implicit fusion may make it challenging for the final MLP to express high-frequency facial details. Our two-scale PIFu uses two MLPs to estimate the face and body occupancy fields, and explicitly fuses the two fields, which assigns each PIFu with more capacity to reconstruct high-frequency details. \n\nWe will cite and discuss this paper in our revision. Besides, our work is concurrent to JIFF [Cao et al. CVPR'22]. We did not see this paper when we submitted our paper to NeurIPS 2022.\n\n**Q3: Please clarify how to get the input mask.**\n\nAs described in our supplemental (Sec. A), we use the background-matting-v2 [7] method to obtain the body binary masks. For high-resolution facial RGB images, we use the RetinaFace [2] method to detect the front-view face region, and a face skin segmentation network [3] to obtain the facial binary mask.\n\n**Q4: I would recommend not using different background color for ours in the qualitative results. L157: I' and D' do not appear anywhere else. L30: minimally clothed. L297: not small?**\n\nWe thank the reviewer for careful reading. Will revise the paper accordingly.\n\n**Q5: The paper does not discuss limitations or its societal impact. Discussion would be highly recommended.**\n\nWe have discussed the limitations in Sec.5 of our paper. We observed that our two-scale PIFu method still may not handle hands well. Due to the fact that hands have high degrees of movement freedom in the 3D space, they are difficult to locate and reconstruct from the front view.\n\nWe have discussed the societal impacts in our supplemental (Sec. D) that our method may be used to create malicious applications based on 3D reconstruction. We will state them clearly in the revision.\n", " We are glad to see the reviewer comment on our work achieving impressive results. We will cite the PaMIR paper. In the following, we address the raised concerns in the review comments.\n\n**Q1: The novelty of this paper is a bit of weak. ... I would regard this difference as engineer choices instead of a \"technical contribution\".**\n\nThe PIFuHD [71] method extends PIFu [70] representation in a coarse-to-fine manner. Hence, PIFuHD [71] still suffers from lacking correct facial details. Our motivation for designing two-scale PIFu is to reduce the complexity of full-body occupancy fields estimation by modeling face and body separately. 
This strategy further allows our method to leverage the existing high-resolution face dataset (e.g., FaceScape [89] ) to model high-frequency details in the face part. \n\n**Q2: No textured results are demonstrated. The authors are encouraged to present the textured models because it is a more intuitive way to evaluate the geometric reconstruction accuracy.**\n\nWe thank the reviewer for this suggestion. Textured results are provided in Fig. A and Fig. B via link (https://sites.google.com/view/twoscale). Here we used the MVS-Texturing [Waechter et al. ECCV'14] to generate textured meshes from the original captured RGB images. It can be seen that our generated texture results are of high quality and well aligned with our reconstructed geometries. We will add them in the revision.\n\n**Q3: To evaluate the multi-task design, the authors simply remove the depth denoising task. I think a better baseline for this experiment would be a sequential framework where the depth maps are firstly refined by the depth denoising module and then fed into the 3D reconstruction network.**\n\nAs suggested, we evaluate our multi-task design by using a sequential framework that first performs the depth denoising and then the human reconstruction using two individual networks. The two networks are trained individually. For depth denoising, we use an ablation version of our full model, by removing the PIFu-body MLP and the occupancy supervision. For human reconstruction, we remove the depth supervision and compute the trunc-TSDF from the input depth maps. \n\nWe report the errors of mesh (P2S and Chamfer distance), normal (L2 and Cosine distance), and the refined depth map (L1 distance) on our test set in Tab. A of link (https://sites.google.com/view/twoscale). We can see that the performance of the suggested sequential method is lower than that of our method (i.e., multi-task manner).\n\nComparing the refined depths and normals of the sequential model with our model (Fig. D in https://sites.google.com/view/twoscale), we find solely depth denoising tends to produce incorrect geometric details (e.g., error depths and normals.), which further results in geometric errors in the reconstruction process. This experiment verifies our multi-task design. We will add these quantitative and qualitative results in the final version.\n\n**Q4: I am skeptical about the experiments in Fig. 3. ... I think the authors should discuss more about the experiment in Fig. 3 and provide more results if possible.**\n\nSince the input RGB images for the depth denoising task contain edges and object boundaries which are clues to recovering the local geometric details, we observed that the denoised results had such local details. However, there might exist errors in the details reconstructed by the denoising task. For example in Fig. 3 (g), although the facial region looks sharper than that in Fig. 3 (h), some local geometric details are incorrect. These errors can be further mitigated by the 3D reconstruction task. Since our Geometry-aware Features are used for both depth denoising and 3D reconstruction tasks, the two tasks can benefit each other to further improve the details of reconstruction and refined depth, as shown in Fig. 3(h, j, f). Moreover, the 3D reconstruction task also provides global topology guidance for the occupancy fields. 
To avoid possible confusion, We plan to add a description after Line 50 in our paper to explain that the 3D reconstruction task can work with denoising to further improve the quality of the reconstruction results.\n\nBesides, we show more qualitative results in the link (https://sites.google.com/view/twoscale). As shown in Fig. D (b,c,d), solely depth denoising tends to introduce incorrect details, resulting in larger errors for normals compared to our multi-task model. We will add these qualitative results in the revision.\n\n---\n\nMVS-Texturing : Waechter M, Moehrle N, Goesele M. Let there be color! Large-scale texturing of 3D reconstructions[C]//European conference on computer vision. Springer, Cham, 2014: 836-850.\n", " We are delighted to see that Reviewer wHR5 comments on our work defining an interesting case for PIFu. We will revise our figures and font sizes accordingly and address the raised concerns below.\n\n**Q1: The approach is just taking some rather straightforward improvements onto the existing PiFU HD, so the technique itself isn't particularly original.**\n\nOur method and PIFuHD [71] both use two PIFu-based modules to predict the occupancy fields. However, our method is fundamentally different from PIFuHD in two folds.\n\nFirst, PIFuHD [71] designs two-scale PIFus in a coarse-to-fine manner, where the face and body occupancy fields are still jointly estimated. Hence, PIFuHD suffers a similar problem of the incorrect flat facial surface as that of PIFu [70]. Our two scale-PIFu uses two MLPs to estimate the face and body occupancy fields separately, which assigns each PIFu with more capacity to reconstruct high-frequency details.\n\nSecond, unlike the PIFuHD [71], we aim to address the problem of human reconstruction from sparse noisy RGBD sensors. Accordingly, we formulate depth denoising and 3D reconstruction as a multi-task learning process.\n\n**Q2: One weakness is the lack of qualitative examples describing the weaknesses noted in Section 5. It would be nice as a reader for the paper to demonstrate visually why hands remain challenging.**\n\nWe thank the reviewer for this suggestion. Two visual examples are provided in Fig. C via link (https://sites.google.com/view/twoscale). We can see that the geometry-aware two-scale PIFu still struggles to reconstruct high-quality hand details when the hand is occluded or the hand pose is complex. In addition, the large depth noise and less RGB information for hands will also make it challenging to obtain high-frequency hand details.\n\n**Q3: For the limitations section, did the authors try to train a 4-scale occupancy model with separate occupancy models to model each hand?**\n\nWe haven't successfully trained a 4-scale PIFu model to reconstruct each hand additionally. In our two-scale PIFu, we model the face surface from only the front view RGBD since the front image contains enough details for the face reconstruction. \n\nIn contrast, hands have more degrees of movement freedom in the 3D space, which makes them harder to locate and reconstruct from only one view based on PIFu [70]. Furthermore, a high-quality hand dataset needs to be prepared to pre-train the 4-scale PIFu, which is also a time-consuming task. \n\nIn another way, combining hand parametric models with our two-scale PIFu may be a feasible solution for hand modeling, and we will explore the topic in the future. 
\n\n", " This paper proposed several additions onto the PIFU architecture that makes it perform better when trained and evaluated with noisy sparse RGBD views. The paper has three major contributions. 1. A two scale PIFU model to render detail at different scales and to focus on the face of the human avatar. 2. An extension of PIFU to take in noisy RGBD data from real world sensors. 3. A demonstration of the results on 3 Kinects The paper is very well written. It provides competitive qualitative and quantitative results with state of the art. The paper is very well written and include useful figures and diagrams. One useful novel contribution their method includes merging the occupancy maps of the face and body. I think this might be the most general contribution in the paper as it stands. The separate modeling of the face and body with their own occupancy fields might also be a good way to optimize the number of parameters needed for high fidelity body construction. The approach is just taking some rather straightforward impovements onto the existing PiFU HD, so the technique itself isn't particularly original. However, it does define an interesting use case for PiFU, sparse RGBD reconstruction with Kinects and demonstrates noticeable improvements on it. I will note that I am not extremely familiar with the latest literature surrounding PiFU so may be missing some related work here, but the the contributions are somewhat novel and useful to future work on digital humans.\n\nOne weakness is the lack of qualitative examples describing the weaknesses noted in Section 5. It would be nice as a reader for the paper to demonstrate visually why hands remains challenging. * Some of the text on the figure is so small it can be hard to read when written out.\n* In Figure 9, what does \"Ours\" have a different background\n* For the limitations section, did the authors try to train a 4 scale occupancy model with seperate occupancy models to model each hand? It sounds like the authors may have experimented with doing so, but failed to get good results. If so, this information would be very helpful to at least include in the paper's supplement. Yes", " This paper presents a method for reconstructing 3D human models from sparse RGBD inputs. The proposed method is built upon PIFu. To eliminate the negative impact of depth noise, the authors introduce a multi-task learning framework that incorporates the complementary nature of depth denoising and 3D reconstruction. The authors also introduce an additional MLP for facial regions to further improve the reconstruction quality of human faces. Experiments show that the proposed method can generate high-quality 3D human models from sparse RGBD sensors. \n Strengths:\n\n* The results look impressive. Given noisy depth measurement from sparse RGBD sensors, the proposed method is able to reconstruct clean 3D models with high-quality geometric details. \n\n* The main contributions of this paper are convincingly validated. The authors present extensive experiments throughout the paper. \n\n* This paper is overall well-written and easy to follow. \n\n\nWeaknesses:\n\n* The novelty of this paper is a bit of weak. In the introduction, the authors claim the two-scale PIFu representation as one technical contribution. However, a similar design has already been proposed in PIFuHD. 
I understand that the fine-scale network in PIFuHD is still designed for the whole body while the fine-scale MLP in this paper only focuses on the facial region, but I would regard this difference as engineer choices instead of a \"technical contribution\". \n\n* No textured results are demonstrated. The authors are encouraged to present the textured models because it is a more intuitive way to evaluate the geometric reconstruction accuracy. \n\n* To evaluate the multi-task design, the authors simply remove the depth denoising task. I think a better baseline for this experiment would be a sequential framework where the depth maps are firstly refined by the depth denoising module and then fed into the 3D reconstruction network. \n\n* Missing citation: Zheng et al. PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction. IEEE T-PAMI. \n\n******************************************************************\nFinal comments:\nThanks for the authors' response which addresses some of my concerns. After reading other reviews and the rebuttal, I would like to keep my initial rating. The authors should add their additional experimental results into the final paper if the paper gets accepted finally. I am skeptical about the experiments in Fig. 3. In the main text of the paper, the authors emphasis that the task of depth denoising preserves local geometric details while 3D reconstruction provides global topology guidance. However, in Fig. 3 (g), which is the results of solely depth denoising, the refined depth looks much smoother than (h) and does NOT preserve local geometric details. This contradicts with the main insight of this paper and makes me feel that the geometric details is acturally produced in the 3D reconstruction task rather than the depth denoising task. I think the authors should discuss more about the experiment in Fig. 3 and provide more results if possible. \n\n The authors have discussed the limitations in Sec. 5 and the potential societal impact in the supplement material. No more improvement is needed. \n\n", " This paper presents a 3D reconstruction method of clothed humans from sparse RGBD inputs. This work argues that jointly solving depth refinement and PIFu-based 3D reconstruction is mutually beneficial with empirical evidence. For effective information fusion between RGB and depth, the paper also propose cross attention module and geometry aware module. Additionally, the paper further improves the fidelity of face reconstructoin by processing face and body separately and fused them back into a single 3D model. With THuman2.0 dataset, the proposed approach outperforms existing approaches and the ablation study validates their technical contributions.\n The strengths of the paper can be summarized as follows:\n- The qualitative results are impressive. Given noisy RGB-D inputs, the proposed method successfully generates high-resolution and complete 3D geometry of clothed humans. \n- The paper presents extensive experiments to show the effectiveness of the proposed approach over the existing approaches. \n- The proposed rgb and depth fusion module are interesting and shows improvements in performance. \n\nHowever, there are several weaknesses in this work:\n- While the paper argues that the multi task learning of depth refinement and 3D reconstruction is the key contribution, the experiments do not necessarily support this claim. 
More specifically, it is not clear if solving them together as multi-task is critical or simply providing cleaner depth map is beneficial for the following steps. To validate this, I would recommend running experiments by feeding refined depth that is separately predicted as isolated module into the proposed pipeline. Without such an experiment, the key contribution of this work remains unclear.\n- This paper misses one highly relevant paper, JIFF, where PIFu-like model predicts body and face regions separately and fuse them to produce high-quality reconstruction (although they are based on RGB inputs). Given the existence of this work, the contribution of two-scale PIFu would diminish. \n\nJIFF: Jointly-aligned Implicit Face Function for High Quality Single View Clothed Human Reconstruction\nYukang Cao, Guanying Chen, Kai Han, Wenqi Yang, Kwan-Yee K. Wong, CVPR 2022\n\nWhile the results are impressive, the extent of the paper contributions remain unclear with the current form. Thus, I would stay on the fence, slightly inclined towards rejection. Please provide evidence that multi-task learning is beneficial, not depth refinement itself.\n\nOther comments:\n- I would recommend not using different background color for ours in the qualitative results. \n- L157: I' and D' do not appear anywhere else.\n- Please clarify how to get the input mask.\n- L30: minimally clothed\n- L297: not small? The paper does not discuss limitations or its societal impact. Discussion would be highly recommended. ", " The paper propose a method for human 3D reconstruction. The main contribution includes: 1. a geometry-aware PIFu method for human reconstruction from RGBD images; 2. a two-scale method dealing with body and face separately and then fusing the results; 3. experiments results showing the effectiveness on Kinect captured data. Strengths:\n\n+ The experimental results are good. Clothing deformation and facial features are reconstructed better than compared methods.\n\n+ The ablation study is extensive, showing the necessity of each proposed module.\n\nWeaknesses:\n\n- The method seems to be too complex, and more like stacking existing methods and modules, limiting the novelty of the paper.\n\n- The writing needs significant improvement: (1) The pipeline figure 2 needs to be simplified. Showing too many details in one single figure makes readers get lost. (2) The math should be simplified, too. Instead of defining a lot of symbols, it would be good if the author could describe the key idea (or even for a simplified pipeline) using very simple equations. Complex details could go to the supplementary material. (3) I would not recommend using \"\\copyright\" as the concatenation operation, because it already has other meanings. ‖ could be a better choice.\n\n- Since the method deals with face and body separately, it would be good if the comparison with face reconstruction methods is included in the experiments.\n\n- The related work could be expanded. Here I listed some relevant papers, but the author could include more:\n[a] Real-time Deep Dynamic Characters, SIGGRAPH'21;\n[b] Texmesh: Reconstructing detailed human texture and geometry from rgb-d video, ECCV'20;\n[c] MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video, 3DV'20 1. Many modules are used in the pipeline. Which of them are completely novel? Which are adapted from existing works?\n2. How are the parameters selected? The authors have discussed the limitations. 
However, it is unclear whether the method works in some challenging scenarios. E.g., loose dressing, eye glasses, etc." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 3 ]
[ "PCRog7lu3-b", "RaiqQns4bv", "9N7OAmGcVLS", "9DUknTGnLDN", "kibxePx4bYu", "nips_2022_jtq4KwZ9_n9", "nips_2022_jtq4KwZ9_n9", "9GRJBmwdyG", "jvKW01tNPXr", "GlE7Ts8TK_E", "kq7o7Eop1v", "nips_2022_jtq4KwZ9_n9", "nips_2022_jtq4KwZ9_n9", "nips_2022_jtq4KwZ9_n9", "nips_2022_jtq4KwZ9_n9" ]
nips_2022_SLdfxFdIFeN
A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective
We propose the first unified theoretical analysis of mixed sample data augmentation (MSDA), such as Mixup and CutMix. Our theoretical results show that regardless of the choice of the mixing strategy, MSDA behaves as a pixel-level regularization of the underlying training loss and a regularization of the first layer parameters. Similarly, our theoretical results support that the MSDA training strategy can improve adversarial robustness and generalization compared to the vanilla training strategy. Using the theoretical results, we provide a high-level understanding of how different design choices of MSDA work differently. For example, we show that the most popular MSDA methods, Mixup and CutMix, behave differently, e.g., CutMix regularizes the input gradients by pixel distances, while Mixup regularizes the input gradients regardless of pixel distances. Our theoretical results also show that the optimal MSDA strategy depends on tasks, datasets, or model parameters. From these observations, we propose generalized MSDAs, a Hybrid version of Mixup and CutMix (HMix) and Gaussian Mixup (GMix), simple extensions of Mixup and CutMix. Our implementation can leverage the advantages of Mixup and CutMix, while it remains very efficient; the computational cost is almost negligible, as with Mixup and CutMix. Our empirical study shows that our HMix and GMix outperform the previous state-of-the-art MSDA methods in CIFAR-100 and ImageNet classification tasks.
Accept
This work proposes a theoretical analysis and unified specification for mixed sample data augmentation methods. The reviewers praise the extensive theoretical analysis as well as the strong empirical results in the paper. The authors and reviewers engaged in substantial discussion, which led multiple reviewers to revise their assessment of the paper upwards. I can therefore recommend accepting this paper.
train
[ "5grKWScerZa", "fSsmFbay6O", "jlS5xNckvzXo", "2P6a0yZcbeR", "nlkvg5XWO_", "VX5mjdjUVX", "G6EzvP5tl-y", "dRxBvFlHvvVi", "Xevx7iccE9-", "n1vw6OieRja", "7tiJel2m9AY", "YwHPm7xRk0v", "ei5ImxsJN0p", "lo9Dh4O6f03", "IjVRAOK1Pbk" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the rebuttal. \nThe overall results look good to me.\n", " We are happy to hear that our revision version of the paper clarifies our contribution. Thanks for the recommendation of the state-of-the-art MSDA papers. We note that we already mentioned [1,2, 4-8] in the original paper. We added the discussion in Appendix J that dynamic MSDA could or not be applied to our theorems, including [1-8]. \n\n[1] Kim et al., Co-mixup: Saliency guided joint mixup with supermodular diversity. ICLR 2021.\n\n[2] Uddin et al. Saliencymix: A saliency guided data augmentation strategy for better regularization. ICML 2021.\n\n[3] Hong et al., Stylemix: Separating content and style for enhanced data augmentation. CVPR, 2021.\n\n[4] Liu, et al. AutoMix: Unveiling the Power of Mixup for Stronger Classifiers. ECCV, 2022.\n\n[5] Kim et al. Puzzle mix: Exploiting saliency and local statistics for optimal mixup. ICML, 2020\n\n[6] Verma, et al. Manifold mixup: Better representations by interpolating hidden states. ICML 2019\n\n[7] Qin et al., Resizemix: Mixing data with preserved object information and true labels. arXiv preprint arXiv:2012.11101.\n\n[8] Harris et al., Fmix: Enhancing mixed sample data augmentation. ICLR, 2021.", " Thank you for the rebuttal.\nThe overall results look good to me now.\nHowever, the authors have missed citing several state-of-the-art mixup works such as Co-Mixup [1], SaliencyMix [2], StyleMix [3], AutoMix [4], etc. It would be nice to discuss these papers in the related work section and I will raise my score. It is best to compare them if possible.\n\n[1] Kim et al., Co-mixup: Saliency guided joint mixup with supermodular diversity. ICLR 2021. \n\n[2] Uddin et al. Saliencymix: A saliency guided data augmentation strategy for better regularization. ICML 2021.\n\n[3] Hong et al., Stylemix: Separating content and style for enhanced data augmentation. CVPR, 2021.\n\n[4] Liu, et al. AutoMix: Unveiling the Power of Mixup for Stronger Classifiers. ECCV, 2022.", " Thank you for preparing the rebuttal.\n\nThe overall results look pretty good to me know. I'll raise my score to 7.\n\nOne minor concern:\n\n1. FGSM is quite an old attack, and I suggest using AutoAttack[1] instead.\n\n[1] Croce, F., & Hein, M. (2020, November). Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International conference on machine learning (pp. 2206-2216). PMLR.\n\n", " Dear Reviewers,\n\nWe deeply appreciate your reviews. Thank you for having time to read our manuscript and giving good advice. \n\nFirstly, we want to emphasize the novelty and contribution of our work. Our paper provides a unified analysis of MSDA, including Mixup and CutMix, and this is the first work that analyzed the different effects of MSDA with a theoretical lens to the best of our knowledge. We provided the hypothesis that Mixup or CutMix will perform well with our theorems. We also provided new methods and experiments to validate our theorem. \n\nSecond, we revised our paper threefold: \n\n(1) We added some experiments to support our theorem. Moreover, we added the robustness benchmark (e.g., Imagenet-C, ImageNet occlusion, adversarial attack) to provide more abundant examples of when Mixup or CutMix perform well. Lastly, the validation for the second order Taylor expansion is also provided with experiments. \n\n(2) We changed the description of Figure 4. In the previous version, the explanation of Figure 4 was insufficient, as mentioned in the review. 
We make the description of Figure 4 clear. \n\n(3) We changed the minor errata in the paper.\nThe revised paper is attached in the supplementary material. \n\nIf further inquiries come up within the rebuttal period, we would be pleased to talk with the reviewer again!\n\nBest,\n\nAuthors\n", " We thank Reviewer phJK for positive comments and constructive suggestions for improving our work. We are also pleased to hear that the reviewer found (1) our theoretical result is the first study to explain why MSDA works (2) our theoretical results can show how Mixup and CutMix are different in terms of gradient and Hessian regularization (3) our theoretical analysis is consist with the proposed HMix and GMix (4) our paper is well-written.\n \nWe addressed all concerns raised by the reviewer and revised our paper accordingly.\n \n\n### No validation of second-order Taylor approximation (W1, Q1)\nThanks for the constructive feedback. We also agree that our theorem can be better if we can validate the second-order Taylor approximation. Hence, we validate the second-order Taylor approximation following Zhang et al. [R1]. We employ the same toy dataset of Zhang et al. and employ two simple MSDA methods: (1) randomly and independently chooses its mask either 0 or 1 for each input dimension (2) a simple Mixup. We visualize the approximated loss functions of the employed MSDA methods and the true MSDA loss functions. We add the figures in Figure 1 of the revised version of paper. In the figure, we can observe that the approximation gaps between the true loss functions and the approximated loss functions are small as also observed by Zhang et al. \n\n### Extension to other MSDA methods (ResizeMix, FMix, PuzzleMix)?\nOur theorems can be applied to any MSDA method with an analogous formula, regardless of the assumption of the shape of the mask. In this paper, we mainly focused on Mixup and CutMix because they are the most common MSDA methods among the whole MSDA family as well as their behaviors are distinctly different in terms of our theorem. \n\n<ResizeMix>\n\nResizeMix can be explained by our theorem if we add the assumption on the dataset $\\mathcal{D}_X$ that $\\mathcal{D}_X$ has all resized versions of the image. ResizeMix uses the resized version of input (i.e., one of the mixed patch is the “resized” version, not a cropped one) where the random resize is applied to the whole dataset. In other words, ResizeMix is a special case of CutMix when we apply a special version of random resize crop operation. Hence, if we assume a different version of random resize crop rather than the standard version (which is independent to our theoretical results and underlying assumptions), ResizeMix is equivalent to CutMix that leads to the same theoretical result to CutMix.\n\n<FMix>\n\nFMix randomly samples the mask from the Fourier space. Since FMix is one of the static MSDA, we can directly apply our Theorem 1-4. \n\n<PuzzleMix>\n\nPuzzleMix samples the mask depending on the saliency map of the given input. Therefore, PuzzleMix is a dynamic MSDA, while we can apply our Theorem 1, but it is not straightforward to interpret what is the meaning of regularizer. \n\n\n### Other comments (W3, W2)\n- No $\\alpha_{ij}$ of HMix (W2): Thanks for the comment. We added it in the Appendix.\n- No negative societal impacts: Sorry for the mistake. We added the negative societal impacts in the revised Appendix.\n\n[R1] Zhang, Linjun, et al. 
\"How does mixup help with robustness and generalization?.\" ICLR 2021\n", " \n### HMix and GMix do not consider label mismatch problem\nWe first emphasize that our theoretical results are invariant to the existence of the label mismatch problem; there is no assumption on data points, there is no assumption on mask. Moreover, **our MSDA loss function formulation depends on the distribution of the mask, not the individual mask sampling** for each data augmentation step. For example, one can imagine an extreme case of label mismatch such as mixing two backgrounds without any object. However, our mask distribution also allows other cases, such as two images mixed well with proper labels. In other words, some individual mask sampling can suffer from extreme label mismatch problems, but it is not a problem of mask distribution itself.\n \nMore specifically, our approximated loss function (6) always holds although some individual mask sampling suffers from label mismatch problems. Moreover, as shown in our theorem, the regularization terms (8) are only determined by the formulation of the mask distribution: $a_{jk} := E_M[(1-M_j)(1-M_k)]$ (9). If we sample masks in a data independent manner, then **$a_{jk}$ is independent of the data points, which means that the label mismatch problem does not change our theoretical derivation**. Therefore, our proposed HMix and GMix do not consider label mismatch problems, and it does not hurt our theoretical results, because label mismatch problem is only dependent on the individual mask sampling.\n \n \n### Both hybrid strategies are data-independent\nIn this paper, we focus on explaining data independent MSDA methods, i.e., $M$ is a random variable only depending on $\\lambda$, but independent to $x$ (L80-81 and Remark 2). As we already emphasized that our proof techniques also can be applied to the dynamic MSDA methods because our theoretical analysis are invariant to the choice of $M$ and $N$. We agree that data dependent MSDA could be an interesting research topic, but our study is independent of data dependent mask selection strategy and any data independent mask selection strategy also can enjoy our theoretical results.\n \n### Hyperparameter analysis\nWe thank the reviewer for pointing out the issue that the effect of hyperparameter is not sufficiently discussed. We added the effect of hyperparameters in this rebuttal comment and the revised version of the manuscript. \n\n<Impact on $r$ for HMix>\n| PreActRN18 | CIFAR-100 accuracy |\n|-----------------|--------------------|\n| Mixup | 77.21 |\n| CutMix | 78.66 |\n| HMix ($r$=0.75) | **79.43** |\n| HMix ($r$=0.5) | 79.25 |\n| HMix ($r$=0.25) | 78.05 |\n\n<Impact on $\\alpha$ for HMix>\n| PreActRN18 | CIFAR-100 accuracy |\n|---------------------------------|--------------------|\n| Mixup | 77.21 |\n| CutMix | 78.66 |\n| HMix ($\\alpha$=0.5, $r$=0.5) \t | 78.85 |\n| HMix ($\\alpha$=1.0, $r$=0.5) | **79.25** |\n| HMix ($\\alpha$=2.0, $r$=0.5) | 78.47 |\n\n<Impact on $\\alpha$ for GMix>\n| PreActRN18 | CIFAR-100 accuracy |\n|----------------------|--------------------|\n| Mixup | 77.21 |\n| CutMix | 78.66 |\n| GMix ($\\alpha$=0.25) | 78.60 |\n| GMix ($\\alpha$=0.5) | **79.17** |\n| GMix ($\\alpha$=0.75) | 78.64 |\n| GMix ($\\alpha$=1) | \t79.05 |\n \n\n[R1] Lee, Jin-Ha, et al. \"Smoothmix: a simple yet effective data augmentation to train robust classifiers.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 2020.\n\n[R2] Verma, Vikas, et al. 
\"Manifold mixup: Better representations by interpolating hidden states.\" International Conference on Machine Learning. PMLR, 2019.\n\n[R3] Uddin, A. F. M., et al. \"Saliencymix: A saliency guided data augmentation strategy for better regularization.\" International Conference on Learning Representations, 2021.\n \n", " We thank the reviewer for their time and suggestions for improving our submission. We are happy to hear that the reviewer found the paper well-organized. \n \nThe main goal of our paper is to understand various MSDAs from the optimization perspective and to provide a generalized loss function form. We also have shown that there is no one-fit-all solution for MSDA, resulting in proposing a hybrid version of popular MSDA methods, i.e., Mixup and CutMix. We address the reviewer’s comments below.\n \n \n### Comparison with SmoothMix (W1 and Limitation 1)\nOur paper focuses on theoretical contribution (a unified theoretical framework for general MDSA). HMix, GMix are our new methods, but we propose the methods for supporting our theoretical results (as the reviewer also agreed), rather than proposing a novel data augmentation method. On the other hand, SmoothMix [R1] focuses on alleviating the strong-edge effect; while our HMix and GMix focus on leveraging the advantages of both CutMix and Mixup with a theoretical understanding. Even though our method can look similar to SmoothMix, because the main motivation of the papers and the conclusion are completely different, we argue that SmoothMix does not weaken our contribution, such as unified theoretical analysis of MSDA and empirical analyses for supporting our main theorem.\n \nWe also point out the main differences between our proposed methods and SmoothMix below, so we claim that the proposed algorithms and SmoothMix are entirely different. \n- SmoothMix-C focuses on alleviating the strong-edge effect. Therefore, the mask value gradually decreases with rectangular shapes. However, HMix focuses on leveraging the advantages of CutMix and Mixup, so the mask of HMix does not gradually decrease. Moreover, the parameter of SmoothMix-C ($\\sigma$) is uniformly selected in the original paper [R1], but we firstly select $\\lambda$, which is the mix ratio of two samples in Beta distribution as Mixup and CutMix. Second, HMix can change the hybrid ratio with $r$, which also has Beta distribution, while SmoothMix-C has a fixed value of $k$ for alleviating the strong-edge effect. We think this is a significant difference between SmoothMix-C and HMix since HMix can recover both Mixup and CutMix by selecting the parameter of the Beta distribution. As we mentioned in our paper, the performance of MSDA depends on the specific task or domain, and our theorems also support it, so we should consider diverse situations and use several candidates between Mixup and Cutmix. Therefore, while SmoothMix-C is also a great idea, intuition is completely different. \n- SmoothMix-S and GMix also differ in parameter selection. The parameter of SmoothMix-S ($\\sigma$) is also selected from the uniform distribution, while GMix selects parameters from Beta distribution as Mixup and CutMix. \nWe think this parameter selection issue is important since we directly control the mixing ratio distribution with Beta distribution, but in SmoothMix, more focuses on alleviating the strong-edge effect, which can not directly control the distribution of the mixing ratio. 
Controlling the mixing ratio is also an empirically important issue, as alleviating the strong-edge effect. Therefore, these two algorithms are different. \n \n### Not enough comparison methods (W2 and Limitation 2)\nWe add two additional comparison methods (ManifoldMixup and SaliencyMix). Every experiment is done with the same hyperparameter from each paper.\n \n1. Our paper mainly focuses on pixel-level MSDA (pixel-level mix operations), while ManifoldMixup [R2] is a feature-level MSDA. Note that, we also have shown that feature-level MSDA also guarantees the generalization bounds in Theorem 4. We additionally report the ManifoldMixup result. \n2. We also mainly focus on data independent MSDA (e.g., Mixup and CutMix), while SaliencyMix [R3] is a data dependent method. Note that our main Theorems also hold for a data dependent MSDA, but we do not analyze data dependent MSDA methods because the meaning of loss function is not clear for data dependent MSDA; because the second equality of (7) and (8) do not hold anymore, it is hard to interpret the approximated loss function as an input gradient / Hessian regularizer. We show the SaliencyMix results in the below table.\n\n| PreActRN18 | CIFAR-100 accuracy |\n|----------------|--------------------|\n| Mixup | 77.21 |\n| CutMix | 78.66 |\n| SaliencyMix | 77.84 |\n| Manifold Mixup | 78.63 |\n| HMix | **79.25** |\n| GMix | **79.17** |\n \n", " We thank the reviewer for the constructive and positive reviews. We are happy to hear that Reviewer 4bQe agree with that (1) our generalized loss formulation and analysis help overall understanding of how MDSA scheme works (2) our general loss function is a significant contribution and should be helpful for other MSDA methods (3) HMix, GMix results support our claim (4) our paper is well written and clear.\n \nWe answer all questions raised by the reviewer and revise the manuscript accordingly.\n \n \n### Q1. Can the proof be self-contained?\nWe agree with the reviewer. We revised our main manuscript to self-contain the main proof sketch, for example:\n- Proof sketch for Theorem 1: Using the definition of $z_{ij}$ and using the fact that the Binomial distribution and Beta distribution are in the conjugate, we can reformulate $L_m^{(MSDA)}$. In the process of reformulating $L_m^{(MSDA)}$, we should define $D_\\lambda$. Then, we can make a quadratic Taylor approximation of the loss term. Here, $E_{r_x}[r_x]=0$ is used for not only the simplicity of the results, but also for the fact that using normalization in the dataset. Details can be found in Appendix.\n- Proof sketch for Theorem 3: Defining adversarial loss function and using second order taylor expansion, we can prove that adversarial loss is less than MSDA loss.\n- Proof sketch for Theorem 4: MSDA regularization can be altered to the original empirical risk minimization problem with a constrained function set, and calculating Radamacher complexity of this function set gives the theorem.\nThe revised explanations are in Section 3 of the revised Supplementary Material.\n \n \n### Q2. Why we need $E_{r_x}[r_x] = 0$ assumption?\nWe assume $E_{r_x}[r_x] = 0$ condition (i.e., the mean of the given data points is 0) due to the simplification of the theoretical results. With a simple modification of the proof, we can get the almost same result but the coordination is parallelly shifted by the mean of the dataset. In practice, one can easily make a 0-centered dataset by simply moving all data points that have a mean of 0. 
Note that we actually make a zero-centered dataset for training deep neural networks by subtracting the mean of the dataset to a stable training (e.g., `Normalize(mean, std)` operation in torchvision package).\n \n### Minor issues:\n- We fixed typos in the revised paper.\n- Thanks for enjoying our paper regarding n-sample DA. We included a more detailed description about n-sample DA in the main manuscript (revised Supplementary Material L147-L149) for better understanding. Thanks for the comment.\n \n \n", " \nWe additionally report robustness benchmarks, where CutMix and Mixup behave in a significantly different way; CutMix shows strong occlusion robustness (e.g., when an image is randomly occluded) and Mixup shows strong corruption robustness (e.g., when a Gaussian noise is added to an image [R1]) as shown by Chun et al [R2]. The following table shows the results on ImageNet standard accuracy, ImageNet FGSM accuracy (adversarial attack), ImageNet-occ accuray (center occluded images following Yun et al. [R3]), and ImageNet-C accuracy (Corrupted images by 15 corruptions proposed by Hendrycks et al. [R1]). This table is in the revised version of Appendix I.2. \n \n| Augmentation Method | ImageNet-1K | ImageNet-occ | ImageNet-C | FGSM |\n|---------------------|:---------------:|:-----------------------:|:----------------------:|:-----------------------:|\n| Vanilla (no MDSA) | 75.68 (+0.00) | 55.26 (+0.00) | 42.57 (+0.00) | 8.55 (+0.00) |\n| Mixup | 77.78 (+2.10) | 60.34 (+5.08) | **51.73 (+9.16)** | 27.78 (+19.23) |\n| CutMix | 78.04 (+2.36) | **71.51 (+16.25)** | 44.18 (+1.55) | 33.63 (+25.08) |\n| HMix | **78.38 (+2.70)** | **71.13 (+15.87)** | **46.37 (+3.80)** | **34.98 (+26.44)** |\n| GMix | **78.13 (+2.45)** | 62.76 (+7.50) | 45.97 (+3.40) | 31.02 (+21.47) |\n \n \n \n- In the case of the image dataset being corrupted by noises or blurs (ImageNet-C), we noticed that Mixup performs better than CutMix. This phenomenon can be explained by our theoretical results too. The ImageNet-C style corruption is globally applied to the image regardless of the content of the original image. In this case, the longer-distance relationships are significantly damaged and useless to distinguish the object, hence the shorter-distance relationships are more important. As HMix and GMix less weigh shorter-distance relationships than Mixup (but more weight than Mixup), we can observe that HMix and GMix ImageNet-C performances are better than CutMix, but worse than Mixup.\n- In the case of the image dataset having occlusions, we noticed that CutMix performs better than Mixup. In this case, the occluded areas have no information to distinguish objects, but only local areas are informative. Hence, it is important to capture shorter-relationship rather than global-relationship. As Scenario 1 in page 8, we can expect that CutMix is better than Mixup in this case. Not surprisingly, HMix and GMix are located in between CutMix and Mixup.\n \n \n[R1] Hendrycks, Dan, and Thomas Dietterich. \"Benchmarking neural network robustness to common corruptions and perturbations.\" ICLR (2019).\n\n[R2] Chun, Sanghyuk, et al. \"An empirical evaluation on robustness and uncertainty of regularization methods.\" ICML Workshop (2020).\n\n[R3] Yun, Sangdoo, et al. \"Cutmix: Regularization strategy to train strong classifiers with localizable features.\" ICCV (2019).\n", " \nWe thank Reviewer ynng for the thoughtful reviews. 
We are happy to hear that the reviewer found that (1) our analysis is extensive and our results extend previous theoretical understanding of Mixup to an arbitrary MSDA method, (2) our empirical studies are aligned with our theoretical findings, and (3) the overall writing is good. We address all concerns and questions raised by the reviewer and revise the manuscript accordingly.\n \n \n### Q1. Where is Theorem 4? \nThank you for correcting our mistakes. We correct the numbering of Theorems in the revised version of the Appendix.\n \n\n### Q2. The meaning of Fig 4 including x-axis, y-axis and color bar. & W2. Explanation of Fig 4 & L2. Captions\nEquation (8) shows that the regularization term $a_{ij}$ directly affects to the pixel gradients $|\\partial_i f_\\theta(x_k) \\partial_{j} f_\\theta(x_k)|$ in our approximated loss function. The purpose of Figure 4 is to show how the pixel gradients are actually regularized after training. To show that, we investigate the amount of the regularized input gradients by $|\\partial_v f_\\theta(x) \\partial_{v+p} f_\\theta(x)|$ with respect to the pixel distance vector $p$ for trained models by different MSDA methods. Here, if our approximated loss function actually behaves as a regularization, then we can expect that the pixel gradients $|\\partial_v f_\\theta(x) \\partial_{v+p} f_\\theta(x)|$ is small when $a_{ij}$ is large for the given $p$.\n \nWe first define the partial gradient product as follows: \n$$\\text{PartialGradProd}(x,p) = \\max_{v} |\\partial_v f_\\theta (x) \\partial_{v+p} f_\\theta (x)|$$\nNow, we visualize the pixel-wise maximum values of PartialGradProd(x, p) in Figure 4. We train different models $f_\\theta$ on resized ImageNet (64 x 64) and measure the values on the validation dataset. The x-axis and y-axis of Figure 4 denote the pixel distance $p$ along each x- and y- axis, and the scale of the colorbar denotes the value of the maximum partial gradient product. In the figure, we can observe that CutMix reasonably regularizes effectively in the input gradients products when a pixel distance is small; these results aligned with our previous interpretation, CutMix behaves as a pixel-level regularizer where it gives stronger regularization (larger $a_{ij}$) to the closer pixels.\n \nWe agree with the reviewer that Figure 4 can be re-written better. We changed the description of Figure 4 (L256-L272 in the Supplementary Material), the details of Figure 4 and its caption in the revised version of the paper.\n \n \n \n### W1. The proposed methods show marginal improvements although the current best one – Stochastic Mixup & CutMix – also supports their claims. & L1. The paper only provides the results when CutMix is better than Mixup, no converse case.\nWe first re-emphasize that the main goal of our paper is to understand various MSDAs from the optimization perspective and to provide a generalized loss function form. We also have shown that there is no one-fit-all solution for MSDA, resulting in proposing a hybrid version of popular MSDA methods, i.e., Mixup and CutMix. More specifically, the loss function for MSDA is determined by $a_{ij}(=E_M[(1-M_i)(1-M_j)])$, and we claim that the best $a_{ij}$ is problem and data dependent. In Table 1, we show when Mixup is better than CutMix and the converse case. Mixup performs better than CutMix when the longer distance pixels are relative (e.g., larger objects and smaller crop size). To address the reviewer’s concern, we additionally report HMix results in Table 1. 
The results support that our proposed method takes the advantages from both sides. This table is in the revised version of Appendix I.1. \n \n| | Mixup | CutMix | $\\Delta$ (CutMix - Mixup) | HMix ($r$=0.5) | HMix ($r$=0.75) |\n|------------------------|:------:|:------:|:-------------------------:|:----------------------:|:---------------------:|\n| Scenario 1: Large crop | 58.3 | **64.4** | +6.1 | **61.3 (+3.0 vs. Mixup)** | **63.7 (+5.4 vs. Mixup)** |\n| Scenario 2: Small crop | **67.7** | 67.0 | -0.7 | **67.6 (+0.6 vs. CutMix)** | **67.2 (+0.2 vs. CutMix)** |\n", " This paper extends the previous theoretical analysis on Mixup to all Mixup-based methods with data-agnostic mask selection methods. Inspired by the previous analysis on Mixup, this work also shows that Mixup-based methods help improve generalization and robustness performances. And this paper shows that Mixup-based methods can be viewed as the regularizer of input gradient and Hessian as well as the first layer parameters. From such a view, this work further investigates that different Mixup-based methods have different effects as the regularizer on input gradient and Hessian and hence there is no optimal choice for all the cases. Stand on that, this paper proposed two combinations of Mixup and CutMix that combine the advantages of both of them. Strengths:\n\n1. Extensive analysis to extend previous theoretical understanding of Mixup to most of the Mixup-based methods.\n2. Empirical results to support their theoretical findings.\n3. The overall flow is good, every time I have a question I see the answer.\n\nWeakness:\n\n1. The proposed new methods appear to have marginal improvements over current results, although the current best one - Stochastic Mixup & CutMix also supports their claims.\n2. Some of the parts could be written in a better manner, e.g., the explanation in Fig. 4. 1. I didn't find Theorem 4 in the Appendix. Do Theorem 3 (a) and (b) in the Appendix correspond to Theorem 4?\n\n2. For understanding Fig. 4, could you add the meanings of the x & y-axes and the color bar? Are the x & y-axes mean the distances between pixels along a specific dimension instead of the location of a pixel?\n 1. This paper claims that Mixup & CutMix are good for different cases. But it only provides the results of their proposed methods when CutMix is the better choice. It would be better to provide the case when Mixup is alternatively the better choice to further support that the proposed methods combine the advantages of both these two methods.\n\n2. The captions of figures could be more detailed.", " The authors develop and unified theory to explain the various data augmentation (DA) schemes used in Deep network training literature. Particularly, taking motivation from mixed sample strategies (MSDA) and cropping based approaches such as CutMix, the authors propose a loss formulation that considers any non-DA loss, and can be used to generalize various masking and mixing approaches to DA. They show that the CutMix and MSDA losses can then be interpreted through first- and second-order gradient regularization, thereby providing a framework to incorporate/develop other DA schemes. They propose two alternatives DA schemes, HMix and Mix, that result in improved performance of certain classification tasks. \n\nWhile I did not explore these results completely, the inclusion of results on n-sample DA in the Supplementary helps broaden the scope of the future work/applications of the proposed results in the paper. 
- While my expertise in DA is not specific to loss formulation and analysis topics covered in the paper, the results presented in the paper help improve the overall understanding of how various DA schemes affect the learning objective, which is clearly presented. \n - To the best of my knowledge, the proposed general loss function proposed is a significant contribution and should help the community analyze other DA schemes as well, those not considered in the paper. \n - The empirical results on HMix and GMix help validate the formulation developed.\n - The paper is generally well written, with a clear flow of thoughts and presentation, and is clear to read even for those with minimal expertise in the particular sub-field that the papers is about. \n - Minor Issue: The paper could use a bit of proofreading to fix typos here and there, eg. L147 can be benefit -> can be beneficial. \n\n==== x ==== x ==== x ====\n\nPost-Rebuttal comments: I have gone though the author's responses to my and the other reviewers' comments and as there is an overall consensus that the paper has good merit and is deserving of an acceptance, I will keep my score to accept the paper. While I understand the space constraints present, some points in the Theorem’s proof could still be discussed in the Main paper, to help better understand the results — I’m not sure if this a standard assumption in the field, but it seemed unclear to me why one would need $E_{r_x}[r_x] = 0$, or if there are any specific practical implications to having this assumption. Could we guarantee this on the datasets? \n\n I found the paper to be self contained and the authors seems to have sufficiently discussed most topics that would appear to be open ends for future work, such as explore scenarios with negative M in MSDA, etc. ", " This paper proposes a unified theoretical analysis of mixed sample data augmentation (MSDA). The theory shows that the MSDA training strategy can improve adversarial robustness and generalization of training. Based on these results, this paper proposes generalized MSDAs, a Hybrid version of Mixup and CutMix (HMix) and Gaussian Mixup (GMix), which are simple extensions of Mixup and CutMix. Where HMix unifies Mixup and Cutmix and uses the Beta distribution to determine the hybrid strategy hyperparameters, GMix is a smoother hybrid strategy. ### Strengths\n- The paper is well written and organized.\n- Detailed theoretical derivations support the ideas of the paper.\n\n### Weaknesses\n- Although the paper draws some conclusions through theoretical derivation, the proposed method is too similar to SmoothMix [1].\n- Some classical baselines, such as ManifoldMix [2], are missing from the experimental comparison methods. in addition, some new mixup methods of comparison are missing, such as SaliencyMix [3] (comparable in computational complexity to vanilla mixup), etc.\n- Some problems of theoretical derivation (see the questions for details).\n\n[1] Lee, Jin-Ha, et al. \"Smoothmix: a simple yet effective data augmentation to train robust classifiers.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 2020.\n\n[2] Verma, Vikas, et al. \"Manifold mixup: Better representations by interpolating hidden states.\" International Conference on Machine Learning. PMLR, 2019.\n\n[3] Uddin, A. F. M., et al. \"Saliencymix: A saliency guided data augmentation strategy for better regularization.\" International Conference on Learning Representations, 2021. 
- The advantage of the beta distribution in HMix is not explained, simply because the Beta distribution is more commonly used and the Beta distribution has a closed-form of conjugate representation for easy derivation?\n- The proposed strategies do not consider the label mismatch problem [4]; they both construct hybrid strategies by randomly selecting a box or a point. If the mixed samples and labels do not match, does it affect the theoretical derivation of conclusions?\n- Both proposed strategies do not take full advantage of the previous conclusions, for example, MSDA is data-dependent, but both hybrid strategies are data-independent.\n- The effect of hyperparameters such as \\alpha, r, \\lambda, etc. in the hybrid strategy is not fully explored.\n\n[4] Liu, Zicheng, et al. \"Automix: Unveiling the power of mixup.\" European Conference on Computer Vision, 2022. Based on previous theoretical works on mixup methods, this paper draws some interesting conclusions. However, the proposed method is very similar to the already published SmoothMix and the methods compared in the experimental section are not adequate. Therefore, I think this article is not up to the acceptance criteria.", " This paper proposes unified theoretical analysis of mixed sample data augmentation (MSDA) by extending Zhang et al. (2021) that analyzed theoretical model of Mixup. The authors show that MSDSA can improve performance by regularization effects in gradient and Hessian. Furthermore, they propose new augmentation methods, HMix and GMix, based on their theoretical analysis. Experimental results show that HMix and GMix outperform the previous MSDA methods in CIFAR-100 and ImageNet. Strengths: \n\n- This is a first theoretical study that proposes unified theoretical analysis of MSDA. The authors show theoretical reason why MSDA works. Furthermore, they show that Mixup and CutMix have different regularization effects in gradient and Hessian.\n\n- The proposed HMix and GMix are consistent with their theoretical analysis. HMix and GMix outperform the previous MSDA methods in CIFAR-100 and ImageNet.\n\n- The paper is clearly written.\n\nWeaknesses: \n\n- Theorem 1 requires second-order Taylor approximation. This paper does not consider the validity of this approximation.\n\n- The authors do not provide $\\alpha_{ij}$ of HMix, while they provide the one of Mixup, CutMix, and GMix.\n\n- The authors argue that potential negative societal impacts are discussed in the appendix. However, I cannot find the discussion in the appendix.\n 1. Is second-order Taylor approximation of loss function valid? Zhang et al. (2021) suggest that the approximation is valid for Mixup. Can their result be generalized to other MSDA?\n\n2. Can results of Theorem 1 are applied to other MSDA methods such as ResizeMix (Qin et al., 2020), Fmix (Harris et al., 2021), or PuzzleMix?\n\n- Qin, J., Fang, J., Zhang, Q., Liu, W., Wang, X., and Wang, X. (2020). Resizemix: Mixing data with preserved object information and true labels. arXiv preprint arXiv:2012.11101. \n- Harris, E., Marcu, A., Painter, M., Niranjan, M., Prügel-Bennett, A., and Hare, J. Fmix: Enhancing mixed sample data augmentation. ICLR, 2021.\n This paper does not consider online optimization of mask design $M$ or $N$. It may be an interested topic for future works." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 5, 4 ]
[ "VX5mjdjUVX", "jlS5xNckvzXo", "dRxBvFlHvvVi", "7tiJel2m9AY", "nips_2022_SLdfxFdIFeN", "IjVRAOK1Pbk", "lo9Dh4O6f03", "lo9Dh4O6f03", "ei5ImxsJN0p", "YwHPm7xRk0v", "YwHPm7xRk0v", "nips_2022_SLdfxFdIFeN", "nips_2022_SLdfxFdIFeN", "nips_2022_SLdfxFdIFeN", "nips_2022_SLdfxFdIFeN" ]
nips_2022_qf12cWVSksq
Inception Transformer
Recent studies show that transformer has strong capability of building long-range dependencies, yet is incompetent in capturing high frequencies that predominantly convey local information. To tackle this issue, we present a novel and general-purpose $\textit{Inception Transformer}$, or $\textit{iFormer}$ for short, that effectively learns comprehensive features with both high- and low-frequency information in visual data. Specifically, we design an Inception mixer to explicitly graft the advantages of convolution and max-pooling for capturing the high-frequency information to transformers. Different from recent hybrid frameworks, the Inception mixer brings greater efficiency through a channel splitting mechanism to adopt parallel convolution/max-pooling path and self-attention path as high- and low-frequency mixers, while having the flexibility to model discriminative information scattered within a wide frequency range. Considering that bottom layers play more roles in capturing high-frequency details while top layers more in modeling low-frequency global information, we further introduce a frequency ramp structure, i.e., gradually decreasing the dimensions fed to the high-frequency mixer and increasing those to the low-frequency mixer, which can effectively trade-off high- and low-frequency components across different layers. We benchmark the iFormer on a series of vision tasks, and showcase that it achieves impressive performance on image classification, COCO detection and ADE20K segmentation. For example, our iFormer-S hits the top-1 accuracy of 83.4% on ImageNet-1K, much higher than DeiT-S by 3.6%, and even slightly better than much bigger model Swin-B (83.3%) with only 1/4 parameters and 1/3 FLOPs. Code and models will be released.
Accept
This paper proposes a novel multi-branch style architecture for vision tasks, motivated by a frequency perspective of deep network behaviors. All reviewers are very positive about the motivation, presentation and experimental results. The AC believes this should be a good contribution to the neural architecture design community.
val
[ "SAcKXFzB3TS", "XgTNqIGgOWC", "WqcfbUdkvDs", "p5QEk6Cf0Ov", "x-IhQInmNkp", "jj9oE2rG2kC", "fKTBFqNONB", "mTWPy6FAs_h" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the insightful and constructive comments. Please find the response to your questions below: \n\n**Q1: The proposed method does not provide any results under LARGE configurations. It is necessary to present the comparisons to either Mobile-Former, CVPR 2022/MobileViT, ICLR 2022 or Swin-L/Focal-L/CSwin-L under the fair settings instead of only on an intermediate setting.**\n\n**Authors’ reply:** Thanks. Due to the limited rebuttal time, we only completed the comparison experiment of a tiny version of the proposed iFormer with Mobile-Former under similar parameters. iFormer-Tiny with 10.3M parameters achieves 80.2\\% accuracy, and improves over the accuracy 79.3\\% of Mobile-Former with 14M parameters by 0.9\\%. Considering the large computation cost, we will add the results of large configuration for comparisons in a future version. \n\nIn this paper, our goal is to disclose the defects of transformer from the perspective of the frequency domain and propose a simple and effective solution. We hope that while pursuing higher accuracy, this study will provide valuable insights for the community to design efficient and effective Transformer architectures.\n\n**Q2: The default multi-head self-attention is capable to learn different frequencies given enough training epochs. And previous efforts have shown inductive biases with either convolution or max-pooling show better training efficiency.**\n\n**Authors’ reply:** Thanks. Self-attention can indeed learn different frequencies given enough training, but it is highly capable of capturing low-frequencies in the visual data while not very powerful for learning high-frequencies, which is exactly what we claim in this paper. To prove this point, we visualize the Fourier spectrum of ViT or self-attention in Fig. 1 and Fig. 4 of the submitted manuscript. It is obvious that the Fourier spectrum of ViT or self-attention contains both high- and low-frequency information but the low-frequency information dominates.\n\nUnlike ConVit, EarlyConvolutions and MetaFormer stacking convolution and attention layers in a serial manner, iFormer uses an inception architecture to combine self-attention with convolution and max-pooling, simply and efficiently helping it capture more high frequencies and expanding the perception capability of the Transformer in the frequency spectrum. Moreover, the pioneering frequency ramp structure also proposed in our work enables an effective trade-off between high- and low-frequency components across all layers. Actually, the differences between iFormer and previous hybrid methods have been discussed in the \"Introduction\" and \"Related Work\" parts of the submitted manuscript. In experiments, we have also compared iFormer with these hybrid architectures on ImageNet-1K. \n\n**Q3: The authors should include the results with the combination of ``Attention and DWConv''. According to our previous experience, we can apply large kernel DWConv to achieve better performance than the naive MaxPool operation.**\n\n**Authors’ reply:** As per your suggestion, we conduct ablation study by removing each component from the network. We can see that iFormer-S with convolution, max-pooling and attention outperforms the combination of convolution and attention by 0.1\\%. Moreover, we use large kernel (5x5) to replace 3x3 kernel of DWConv. The results are summarized in the following Tables. We can see that the model with 3x3 kernel achieves better results. 
We believe that this is because the self-attention has learned the information that is learnt by large-kernel DWConv. Besides, the smaller kernel is more conducive and effective for capturing high-frequency information.\n\n| Attention | MaxPool | DwConv | Top-1(%, 100epoch) |\n|:---------:|:-------:|:------:|:--------:|\n| &#10004; | &#10004; | &cross; | 81.2 |\n| &#10004; | &cross; | &#10004; | 81.4 |\n| &#10004; | &#10004; | &#10004; | 81.5 |\n\n\n| Kernel | Top-1(%, 300epoch) | \n|:------:|:-----------------:|\n| 5x5 | 83.2 |\n| 3x3 | 83.4 |", " We thank the reviewer for the insightful and constructive comments. Please find the response to your questions below:\n\n**Q1: What’s the exact policy to change the channel ratio across layers? It’s linear? How do different policies perform (or do they perform differently at all)?**\n\n**Authors’ reply:** According to reference paper [59] as indexed in our submission, the dimension of each attention head is set to 32. We use the head number to compute the channel ratio $C_l / C$. Specifically, for iFormer-S, its four blocks respectively have channel dimensions $C=$ 96, 192, 320, 384 and attention head number 3, 6, 10 and 12. Then we approximately linearly scale $C_l / C$ by setting it as $1/3 (=20/60)$ in the $1$st bock, $1/2 (=30/60)$ in the $2$nd bock, $7/10 (=42/60)$ and $9/10 (=54/60)$ in the $3$rd bock, $11/12 (=55/60)$ in the $4$th bock. In addition to linear scaling, we have also tried cosine scaling to increase the channel ratio of low-frequencies, and it achieves 83.4\\% accuracy on iFormer-S which is the same accuracy as that of linear scaling. In addition to the manually defined strategies in the frequency ramp structure, neural architecture search can be used to automatically search for a balance ratio between the high- and low-frequency components, which is left as our future work.\n\n**Q2: In the low-frequency mixer, multi-head self-attention is made more efficient by using average pooling first and then upsampling. How much saving is obtained in compute? Will this hurt the attention performance too much**\n\n**Authors’ reply:** In this work, we propose the down- and up-sample structure to reduce the computational cost. When removing this structure, iFormer-S has 7.0G FLOPs, and achieves 83.6\\% top-1 accuracy. However, by adopting the down- and up-sample structure, iFormer-S gets a similar accuracy (83.4\\%) but has much fewer FLOPs (4.8 G). \n\n**Q3: How does the feature fusion module compares to direct concatenation [47] in performance?**\n\n**Authors’ reply:** Our feature fusion module for iFormer-S achieves 83.4\\% Top-1 accuracy, while the result of direct concatenation [47] is 83.0\\%. As the upsample operation results in excessive smoothness between adjacent tokens, we use a depthwise convolution to exchange information between patches, which elegantly overcomes this issue.\n\n**Q4: The paper should benefit from a detailed quantification of the high- and low-frequency information.**\n\n**Authors’ reply:** \\paragraph{Authors’ reply:} Thanks for your suggestion. In reference works [19,20], they analyzed the frequency via visualizing the Fourier spectrum. Following [19,20], we also visualize the Fourier spectrum of feature maps for different iFormer layers in Fig. 6 in the submitted manuscript. The visualizations show that our iFormer can effectively trade-off high- and low-frequency components across all layers. 
For better understanding how the Inception mixer fulfills the task in its low- and high-frequency branches, we will try more methods to further analyze the frequency information in the final version.\n", " We thank the reviewer for the insightful and constructive comments. Please find the response to your questions below:\n\n**Q1: I hope the author should provide more discussions on the core difference between iFormer and previous multi-branched network structure.**\n\n**Authors’ reply:** Thanks for your suggestion. The main differences between iFormer and the ResNeXt/Inception-alike CNN family are as follows: \n1) From the motivation, this work aims to disclose the problem of vanilla ViTs, and provides a solution to augment the perception capability of ViTs in the frequency spectrum via a multi-branched structure, while ResNeXt/Inception aims to improve the effectiveness and efficiency of CNNs.\n2) The proposed iFormer uses the multi-branched structure to flexibly balance the different frequencies in the representation so that the frequency ramp structure can effectively trade-off high- and low-frequency components across all layers. In contrast, the RexNeXt/Inception-alike CNN family does not consider the balance between different branches.\n3) For the inception token mixer, we simply use convolution and max-pooling as the high-frequency mixer, which can be replaced if the research community finds any other better token mixer to learn high-frequency information. Differently, the branches of RexNeXt/Inception-alike CNN family are fixed.\n\n**Q2: In addition, as the paper is largely driven by the low/high pass filter design in signal processing, I would encourage the author includes some references on this topic.**\n\n**Authors’ reply:** Thanks for your suggestion. We will add some references about this topic in the final version.\n\n**Q3: Problems on ablation study.**\n\n**Authors’ reply:** Thanks for your valuable suggestion. As per your suggestion, we conduct ablation study by removing the component (max-pooling or convolution) from the inception mixer. As seen from the following Table, combining attention with convolution and max-pooling can achieve the highest classification accuracy on ImageNet.\n| Attention | MaxPool | DwConv | Top-1(%) |\n|:---------:|:-------:|:------:|:--------:|\n| &#10004; | &#10004; | &cross; | 81.2 |\n| &#10004; | &cross; | &#10004; | 81.4 |\n| &#10004; | &#10004; | &#10004; | 81.5 |\n\n**Q4: How to increase the channel ratio $C_l / C$**\n\n**Authors’ reply:** According to reference paper [59] as indexed in our submission, the dimension of each attention head is set to 32. We use the head number to compute the channel ratio $C_l / C$. Specifically, for iFormer-S, its four blocks respectively have channel dimensions $C=$ 96, 192, 320, 384 and attention head number 3, 6, 10 and 12. Then we approximately linearly scale $C_l / C$ by setting it as $1/3 (=20/60)$ in the $1$st bock, $1/2 (=30/60)$ in the $2$nd bock, $7/10 (=42/60)$ and $9/10 (=54/60)$ in the $3$rd bock, $11/12 (=55/60)$ in the $4$th bock. \n\n**Q5: Down-sample and Up-sample Self-attention.**\n\n**Authors’ reply:** In this work, we propose the down- and up-sample structure to reduce the computational cost. When removing this structure, iFormer-S has 7.0G FLOPs, and achieves 83.6\\% top-1 accuracy. However, by adopting the down- and up-sample structure, iFormer-S gets a similar accuracy (83.4\\%) but has much less FLOPs (4.8 G). 
\n", " We thank the reviewer for the insightful and constructive comments. Please find the response to your questions below:\n\n**Q: Have you tried other balancing strategies in the proposed frequency ramp structure?**\n\n**Authors’ reply:** As shown in Table 5, we have tried three balancing strategies, including increasing, invariant and decreasing structures for the low-frequency component. The results reveal that the increasing structure outperforms the other two structures. For the increasing structure, we linearly increase the channel dimensions to low-frequency mixer. In addition, we have also tried cosine scaling to increase the channel ratio of low-frequencies, achieving 83.4% accuracy on iFormer-S, the same accuracy as that of using linear scaling. It is worth mentioning that beyond the manually defined strategies in the frequency ramp structure, neural architecture search can be used to automatically search for a balance ratio between high- and low-frequency components, which is left as our future work.\n\n**Q2: Can you share some insights on the future work that can be derived from this work?**\n\n**Authors’ reply:** This work mainly shows that well-balancing the low- and high-frequency information in data can benefit the performance on many tasks, e.g. classification and detection. Accordingly, any solution towards better learning of low- and high-frequency information should be expected for better performance. In our opinion, there are at least three directions to explore. \n1) In this work, we simply apply the convolution operation and max-pooling to learn the high-frequency components with a parallel structure. In the future, a further optimized architecture of high-frequency token mixer will be explored for capturing high-frequency representations. \n2) Human visual system extracts visual elementary features at different frequencies and interacts the information between different frequencies. This gives us an inspiration that mutually interacting high- and low-frequency representations may benefit the performance.\n3) It has been shown that the adaptive attention has the ability to capture both high- and low-frequencies. We then consider using a regularization method to help improve the ability of attention to learn high-frequency information. \n\n**Q3: Do you plan to make your code publicly available for follow-up research?**\n\n**Authors’ reply:** Yes, we will release the code and models soon.\n\n**Q4: Denote the symbols in Fig.3 and the punctuations for Eq.1 and Eq.5 are missing.**\n\n**Authors’ reply:** Thanks for your careful proofreading. We will revise these typos and also further refine the paper in the final version.\n", " In this paper, the authors propose a novel iFormer architecture to capture both the high and low frequencies in visual data. Motivated by the observation that ViT tends to capture few high-frequency signals (i.e., showing the characteristics of low-pass filters), the authors propose an Inception token mixer, comprising high- and low-frequency mixers to extract the corresponding frequency information on the feature channels. Also, derived from the finding that lower and higher layers generally need more local and global information, respectively, the authors devise a frequency ramp structure to trade off different-frequency components across all the layers. 
Experiments across various tasks including classification, detection, and segmentation, convincingly validate the effectiveness of the proposed iFormer with detailed explanations and discussions。 [Strengths]\n- The proposed iFormer is very well-motivated. Deriving from the results of the Fourier spectrum and the relative log Fourier amplitudes, the authors accordingly develop the iFormer backbone to enhance the perception capability of the Transformer in the frequency spectrum. \n- The experiments are comprehensive and convincing. Results across three challenging tasks show that the proposed iFormer outperforms existing Transformer architecture. Extensive ablation studies and visualizations also demonstrate the effectiveness of the devised two key modules of the Inception token mixer and the frequency ramp structure.\n- The paper is well-written and easy to follow. I enjoy the writing style that starts from the detailed discussions and analysis on the derived observations and then accordingly elaborates the proposed method.\n\n[Weaknesses]\n- The authors are suggested to denote the symbols like X_{h1}, X_{h2}, Y_{h1}, and Y_{h2} in Fig. 3, which will make it much easier to relate Fig. 3 back to the text.\n- The design of a frequency ramp structure that uses a channel ratio to balance the high- and low-frequency components seems straightforward. I wonder if the authors have tried other designs for frequency information balancing and how they perform.\n- It will be great if the authors could share some insights on the future work that can be derived from this work.\n- The punctuations for Eq. 1 and Eq. 5 are missing.\n - Have you tried other balancing strategies other than the manner of setting the sum of the ratios to 1 in the proposed frequency ramp structure?\n- Can you share some insights on the future work that can be derived from this work?\n- Do you plan to make your code publicly available for follow-up research?\n\n Yes, limitations have been discussed. I do not find the potential negative societal impact of this work.", " This paper designs a new family of multi-branched visual transformer models, called Inception Transformer. Motivated by the spectrum properties of local operations (convolution & pool) and long-range operations (self-attention), a multi-branched Inception Mixer block is proposed to capture both high/low frequency information from the images. A channel splitting mechanism is applied to adopt hybrid operation at each stage. \nIn addition, the author assumes that low layers of network favor local/high-frequency pattern and high layers prefer global/low-frequency pattern. Therefore, a frequency ramp structure is utilized to increase the attention dimensions and shrink the convolution dimensions as the network goes deeper.\nComprehensive evaluation on image classification, object detection, instance segmentation and semantic show strong performance. Overall, I enjoy reading this paper, with straightforward motivation and clear writing.\nStrengths:\n1.\tStrong motivation. As the self-attention operation captures the long-range dependency (shown in Figure 1(a)(b)), a local branch with convolution and pooling operations is well-motivated to compensate the frequency preference of attention operation. \n2.\tSimple and efficient structure. Instead of applying different operations on the same feature maps, the high/low frequency mixer is applied on different channel slices in a parallel manner. 
Also, the structure that gradually increases the global operation channel makes sense to me\n3.\tPromising empirical results. On ImageNet, iFormer-S/iFormer-B surpass the current best performed architecture by 0.5 %/0.7% in accuracy on 224 resolution. The iFormer-B achieves 48.3 mAP on COCO2017 val and 48.6 mIOU on ADE20K. All results are surprisingly well.\n\t\nWeakness:\n1.\tLack of discussion with previous multi-branched network structure. Since the current paper focus on the design of a multi-branched block structure, a detailed comparison and discussion should be included, like ResNeXt[1], GoogleNet Family and Inception Family. Although they did not include self-attention in their branches, some of the networks do have the mixed kernel size (1x1 Vs 5x5), which is also another form of low/high frequency mixer. Also, the multi-branched network seems to have better generalization and optimization properties [2] compared with the single-branched counter parts. A simple citation is insufficient to highlight your improvement over them. In addition, as the paper is largely driven by the low/high pass filter design in signal processing, I would encourage the author incudes some reference on this topic.\n2.\tProblems on ablation study. To be scientific, ablation study means quantifying the influence of each modular design by removal/changing of one component from the entire system. But in table 5 row 1-4, the authors are accurately adding one component at each time. I know that a lot of paper take this style of ablation study, but scientifically, this is not a valid ablation study. Better fix this. \n\n[1] Aggregated Residual Transformations for Deep Neural Networks (CVPR 2017)\n[2] Deep Neural Networks with Multi-Branch Architectures Are Intrinsically Less Non-Convex (ICML 2019) 1.\tI hope the author should provide more discussions on the core difference between iFormer and previous multi-branched network structure.\n2.\tFrequency ramp strategy. In the “Frequency ramp structure” section, no description on the how to increase the channel ratio C_l/C. Is it linear scaling according to the depth or exponential increasing? I see the channel number in the appendix, but it seems to be a “magic number” that I am not sure how did you decide on the current number. Moreover, an ablation on how to select the channel number should be included in the experimental part. \n3.\tDown-sample and Up-sample Self-attention. To reduce the computational cost, the author proposes in Equation 7 to (1) down-sample the feature map with average pooling (2) Do self-attention (3) Up-sample to the full size. This operation is not mentioned in previous studies. I am curious about its computational complexity and performance, with the vanilla ViT structure. Please put more discussions on societal impact.", " This paper proposes Inception Transformer (iFormer) to enhance the low-pass filters in standard transformers. Specifically, the authors introduce an Inception mixer that has parallel convolution/max-pooling path and self-attention path, capturing high- and low-frequency information respectively. The Inception mixer differs from existing parallel CNN/attention hybrids in two aspects: 1) Unlike [23, 26, 47] that process all channels in each branch, the Inception mixer adopts a channel splitting mechanism to reduce information redundancy. 
2) The Inception mixer also contains a fusion module to merge the outputs from low- and high-frequency branches, instead of simple feature concatenation.\n\nAnother novelty of the proposed iFormer is the frequency ramp structure. It trade-offs high- and low-frequency info across different layers by gradually decreasing the channel dimensions fed to the high-frequency mixer and increasing those to the low-frequency mixer.\n\nThe full iFormer achieves SOTA performance on a series of vision tasks, including image classification, object detection and segmentation. It shows better performance-cost trade-off than representative ViTs, CNNs and their hybrid variants, showing the potential to serve as a general transformer backbone. Strengths:\n+ It's new to consider enhancing the capability of transformers to capture high-frequency information.\n+ The two key novelties--Inception mixer and frequency ramp structure--are intuitive and make sense. There are ablations provided to support their positive roles.\n+ Although the Inception mixer has a parallel structure that is similarly adopted by existing papers, there are still technical differences (ie. channel splitting and fusion module) that prove useful.\n+ Strong performance is obtained on various vision tasks, and that comes without sacrificing model efficiency.\n\nWeaknesses:\n- The paper should benefit from a detailed quantification of the high- and low-frequency information (e.g. via Fourier analysis) captured for each layer. Is there any interesting pattern across layers? This is the core contribution of the whole paper, but the analysis about how Inception mixer fulfills the task in its low- and high-frequency branches is missing (example qualitative visualization in Fig. 4 may not be enough).\n- As mentioned in the limitations, the channel splitting scheme is lacking analysis and/or ablations. - In the low-frequency mixer, multi-head self-attention is made more efficient by using average pooling first and then upsampling. How much saving is obtained in compute? Will this hurt the attention performance too much (although I get the intention is to focus on embedding global information in the attention branch)?\n- What's the exact policy to change the channel ratio across layers? It's linear? How do different policies perform (or do they perform differently at all)?\n- How does the feature fusion module compares to direct concatenation [47] in performance? Both the limitations and potential negative societal impact are given. It makes sense to bring up the limitation that iFormer requires manually defined channel ratio in the frequency ramp structure.", " This paper presents an iFormer architecture to capture high-frequency and low-frequency visual information. The proposed inception mixer module (i) splits the feature maps along the channel dimension and (ii) applies parallel convolution/max-pooling/self-attention paths to gather high-frequency and low-frequency information.\nBesides, the authors also introduce a simple frequency ramp structure that adjusts the channel partitions across different layers. The experimental results look encouraging. > Strengths\n\n👍 The motivation is clear and the presented idea is very simple and easy to follow.\n\n👍 The proposed iFormer shows slightly better performance while enjoying smaller GFLOPs.\n\n> Weaknesses\n\n👎 The proposed method is TOO NAIVE & lacks NOVELTY. According to Figure 3 and Figure 4, we can see that these three different branches capture information of different frequencies. 
However, the default **multi-head self-attention is capable to learn such information given enough training epochs** and previous efforts ([ConVit, ICML 2021](https://arxiv.org/abs/2103.10697), [EarlyConvolutions, NIPS 2021](https://proceedings.neurips.cc/paper/2021/file/ff1418e8cc993fe8abcfe3ce2003e5c5-Paper.pdf), [MetaFormer, CVPR 2022](https://openaccess.thecvf.com/content/CVPR2022/papers/Yu_MetaFormer_Is_Actually_What_You_Need_for_Vision_CVPR_2022_paper.pdf))have shown that **injecting the inductive biases with either convolution or max-pooling show better training efficiency**.\n\n👎 The proposed method does not provide any results under LARGE configurations. The authors should justify the position of the proposed iFormer on either a TINY configuration that pursues faster running speed & better efficiency or a LARGE configuration that pursues higher accuracy. However, the authors fail to show the proposed method is promising in both aspects. Therefore, it is necessary to present the comparisons to either [Mobile-Former, CVPR 2022](https://arxiv.org/pdf/2108.05895.pdf)/[MobileViT, ICLR 2022](https://openreview.net/pdf?id=vh-0sUt8HlG) or Swin-L/Focal-L/CSwin-L under the fair settings instead of only on an intermediate setting that the community does not care.\n\n👎 The authors should include the results with the combination of ``Attention and DWConv''. According to our previous experience, we can apply large kernel DWConv to achieve better performance than the naive MaxPool operation. Please carefully address the above-listed weaknesses. I will increase the ratings if the proposed method shows advantages under either the TINY or LARGE configuration.\n\n> Update:\n\nAfter reading the comments of other reviewers and the authors' responses carefully, I tend to increase my ratings due to the potential high impact on the community. Yes." ]
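The reviews in this record repeatedly probe the low-frequency branch that the iFormer paper describes (Eq. 7, per the reviewers): average-pool the feature map, run multi-head self-attention on the pooled tokens, then upsample back to full resolution. A minimal sketch of that mechanism follows; it illustrates the operation as the reviewers describe it, not the authors' implementation, and the module name, pooling factor, and head count are assumptions chosen for clarity.

```python
# Sketch of the pooled self-attention discussed in the reviews (assumed details,
# not the iFormer authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PooledSelfAttention(nn.Module):
    """Average-pool -> multi-head self-attention -> upsample back to (H, W)."""
    def __init__(self, dim: int, num_heads: int = 4, pool: int = 2):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=pool, stride=pool)   # spatial down-sampling
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.factor = pool

    def forward(self, x: torch.Tensor) -> torch.Tensor:           # x: (B, C, H, W)
        b, c, h, w = x.shape
        y = self.pool(x)                                           # (B, C, H/p, W/p)
        tokens = y.flatten(2).transpose(1, 2)                      # (B, N/p^2, C) tokens
        tokens, _ = self.attn(tokens, tokens, tokens)              # global mixing on fewer tokens
        y = tokens.transpose(1, 2).reshape(b, c, h // self.factor, w // self.factor)
        return F.interpolate(y, size=(h, w), mode="nearest")       # restore full resolution

x = torch.randn(2, 64, 32, 32)
out = PooledSelfAttention(dim=64)(x)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```

With a pooling factor p, attention runs over N/p^2 tokens instead of N, so its quadratic cost shrinks by roughly p^4; that is the compute saving the reviewers ask about, and whether the pooling hurts attention quality is exactly the empirical question they raise.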
[ -1, -1, -1, -1, 7, 8, 8, 7 ]
[ -1, -1, -1, -1, 4, 4, 5, 5 ]
[ "mTWPy6FAs_h", "fKTBFqNONB", "jj9oE2rG2kC", "x-IhQInmNkp", "nips_2022_qf12cWVSksq", "nips_2022_qf12cWVSksq", "nips_2022_qf12cWVSksq", "nips_2022_qf12cWVSksq" ]
nips_2022_luGXvawYWJ
Dataset Distillation via Factorization
In this paper, we study dataset distillation (DD) from a novel perspective and introduce a \emph{dataset factorization} approach, termed \emph{HaBa}, which is a plug-and-play strategy portable to any existing DD baseline. Unlike conventional DD approaches that aim to produce distilled and representative samples, \emph{HaBa} explores decomposing a dataset into two components: data \emph{Ha}llucination networks and \emph{Ba}ses, where the latter is fed into the former to reconstruct image samples. The flexible combinations between bases and hallucination networks equip the distilled data with an exponential gain in informativeness, which largely increases the representation capability of distilled datasets. To further increase the data efficiency of the compression results, we introduce a pair of adversarial contrastive constraints on the resultant hallucination networks and bases, which increases the diversity of generated images and injects more discriminant information into the factorization. Extensive comparisons and experiments demonstrate that our method yields significant improvements on downstream classification tasks compared with the previous state of the art, while reducing the total number of compressed parameters by up to 65\%. Moreover, datasets distilled by our approach also achieve \textasciitilde10\% higher accuracy than baseline methods in cross-architecture generalization. Our code is available \href{https://github.com/Huage001/DatasetFactorization}{here}.
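To make the factorization described in the abstract concrete: a distilled dataset is stored as a small set of bases plus a few lightweight hallucination networks, and every (hallucinator, basis) pair decodes into a training image, so |H| + |B| stored objects yield |H| x |B| samples. The sketch below illustrates that composition only; the hallucinator architecture, sizes, and names are assumptions for exposition (the paper's hallucinators also carry per-network style parameters), not the released HaBa code.

```python
# Illustrative composition of synthetic images from bases and hallucinators
# (assumed toy architecture, not the released HaBa implementation).
import torch
import torch.nn as nn

class Hallucinator(nn.Module):
    """Tiny conv encoder/decoder; each instance renders bases in its own style."""
    def __init__(self, channels: int = 3, width: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(channels, width, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(width, channels, 3, padding=1)

    def forward(self, basis: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(basis))

num_bases, num_hallucinators = 10, 5
bases = nn.Parameter(torch.randn(num_bases, 3, 32, 32))            # learnable bases
hallucinators = nn.ModuleList([Hallucinator() for _ in range(num_hallucinators)])

# |B| + |H| stored objects expand into |B| * |H| distinct training images.
images = torch.stack([h(bases) for h in hallucinators])             # (|H|, |B|, 3, 32, 32)
print(images.shape)                                                 # torch.Size([5, 10, 3, 32, 32])
```

In downstream training these compositions are produced on the fly, much like a parameterized data augmentation, which is why the rebuttals below report storage cost in parameters rather than in image counts.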
Accept
The reviewers originally had concerns, but these have been well addressed by the authors in a thorough rebuttal, and there is now a consensus for acceptance. We encourage the authors to incorporate all of the reviewers' comments in the final version.
test
[ "0LtvIVYSrXT", "bqykcZd_IC", "W8qR1qGYvTc", "Or-Bxa-d3Z", "Vlm7O_S_0G", "z0HfmKk-4RU", "OmEFD2NrCWD", "HGs-UzXzXJE", "2iU8l6tmr1g", "lbhXrLFICIH", "N6z1PnHylF", "aI54MNdSh8b", "B6fUHF4QaxI", "X4Cr4MqhiL", "kY8uAotxHf7", "FJiyiR5tvuk", "n2TiQRKblk-", "9SSMEuPwJA", "b1Vvxqx-9Ao", "Bj8AZEglkxx", "WRkI5WaxMWF", "quiqq_LX7V1" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. I appreciate that the authors could conduct such an ablation study. It seems that in the small-budget region, the number of bases dominates the performance, while in the large-budget region, the expressivity of the hallucinators dominates the performance. If my understanding is correct, most parameters in the current HaBa are in the bases, which limits the expressive power. It seems to suggest that increasing the expressive power of the hallucinators can be a fruitful future direction when we want to distill tens of thousands of data. I would like to see the authors explore more in this direction in the future. I am satisfied with the author's efforts in the discussion phase so I will increase my score to 7 (Accept).", " We sincerely thank the reviewer for the further question on the scalability. Indeed, under the framework of hallucinator-basis factorization, there are many factors that affect the performance. Given a fixed storage budget, how to scale the bases and hallucinators is an important topic. Among all the factors, we empirically find that the depth of hallucinators, the number of hallucinators, the number of channels in each basis, and the number of bases are the most important ones, which will be studied in the following exploration. Here, we consider three types of storage budget: small, medium, and large, corresponding to the cost of IPC=2, 11, and 51 for the baseline method respectively. We consider cases of 1 and 2 convolution blocks for the depth of hallucinators, 2 and 5 for the number of hallucinators, and 1 and 3 for the number of channels in each basis. For each setting, we adjust the number of bases to fit the given budget. The detailed configurations and results are as follows:\n\n| Storage Budget | Depth of Ha. | # of Ha. | # of Channels in Ba. | # of Ba. | Accuracy |\n| :------------: | :----------: | :------: | :------------------: | :------: | :----------------: |\n| Small | 1 | 2 | 1 | 5 | 56.57$\\pm$0.10 |\n| Small | 1 | 2 | 3 | 2 | 51.91$\\pm$0.73 |\n| Small | 1 | 5 | 1 | 3 | 55.66$\\pm$0.29 |\n| Small | 1 | 5 | 3 | 1 | 48.26$\\pm$0.84 |\n| Small | 2 | 2 | 1 | 4 | **62.02$\\pm$0.35** |\n| Small | 2 | 2 | 3 | 1 | 58.44$\\pm$0.18 |\n| Small | 2 | 5 | 1 | 1 | 60.96$\\pm$0.33 |\n| Medium | 1 | 2 | 1 | 32 | 72.11$\\pm$0.10 |\n| Medium | 1 | 2 | 3 | 11 | 69.02$\\pm$0.30 |\n| Medium | 1 | 5 | 1 | 30 | 70.55$\\pm$0.38 |\n| Medium | 1 | 5 | 3 | 10 | 70.27$\\pm$0.63 |\n| Medium | 2 | 2 | 1 | 31 | 71.74$\\pm$0.11 |\n| Medium | 2 | 2 | 3 | 10 | **73.76$\\pm$0.10** |\n| Medium | 2 | 5 | 1 | 28 | 72.47$\\pm$0.21 |\n| Medium | 2 | 5 | 3 | 9 | 71.76$\\pm$0.22 |\n| Large | 1 | 2 | 1 | 152 | 73.07$\\pm$0.20 |\n| Large | 1 | 2 | 3 | 51 | 74.59$\\pm$0.32 |\n| Large | 1 | 5 | 1 | 150 | 73.26$\\pm$0.37 |\n| Large | 1 | 5 | 3 | 50 | 74.04$\\pm$0.16 |\n| Large | 2 | 2 | 1 | 151 | 73.06$\\pm$0.23 |\n| Large | 2 | 2 | 3 | 50 | 73.26$\\pm$0.42 |\n| Large | 2 | 5 | 1 | 148 | 73.76$\\pm$0.27 |\n| Large | 2 | 5 | 3 | 49 | **75.44$\\pm$0.22** |", " We provide **a more illustrating visualization** in the revised supplement and have the following observations:\n\n1. **For all three types of budget, the best performance is achieved by using deeper hallucinators. Especially under small and medium budgets, using depth 2 can outperform using depth 1 almost consistently.** This can be explained by the more complex sample-wise relationship extracted by hallucinators.\n2. 
In our framework, bases are expected to store sample-independent information while hallucinators are used to encode shared relationships across all the samples. **When the budget is small, using 1-channel bases can achieve significantly better results.** This is because a small storage budget would rely more on increasing the number of independent data samples for better diversity. The informativeness of each basis appears less important.\n3. **When the budget increases, the advantage of 1-channel bases mentioned before would diminish gradually. Especially under large budgets, 3-channel bases outperform 1-channel ones consistently.** The reason is that when the number of bases is adequate, focusing on the informativeness of each basis can produce more benefits than increasing the number.\n4. **When the budget is large, using more hallucinators can yield slightly better results,** which can probably be attributed to the further improvement in the diversity.\n5. **The larger the budget is, the less insensitive the performance is, to different configurations.** \n\nNote that the above exploration is conducted without taking the downstream training speed into consideration, which is also an important metric in the task of dataset distillation. Our opinion on the scalability is that, when downstream training overhead is not an issue, deeper hallucinators are recommended for better performance; otherwise if downstream efficiency is desired, we find that 1 nonlinear block is sufficient since heavier hallucination networks can result in nonnegligible latency, especially when the total number of images is large.\n\nWe would like to thank the reviewer again, and we hope these discussions can bring some insights regarding the scalability.", " Thanks a lot for the update and running the experiments! I'm really impressed by how fast the results are produced :) They also further prove the usefulness of HaBa (esp. CL). \n\nIn the original draft, the motivation for HaBa was not really convincing to me. Part of the paper (1st half) argues about some shared knowledge among samples. If the experiments are to support this claim, then they should really highlight how the parametrization is helpful when total #images is held constant. Yet 2nd half of the paper (experiments) focuses on how results are improved with similar #parameter, which would be fine if the goal is to reduce #parameters (after justifying this goal).\n\nThis mismatch led to my confusion on the motivation and importance of the paper. I would sincerely appreciate if the authors could think more about how to better present the work in a more consistent manner. \n\nThat said, the authors' replies and results have clarified my confusion in the rebuttal period. I now better understand the motivation (sharing knowledge) and am happy to see that the \"side effect\" (lower #parameter/image) also improve CL. Thus, I have raised my score further. ", " Many thanks to the author's responses. My concerns have been addressed.", " We sincerely thank the reviewer fSqe for the reply and are very happy to see that most concerns have been addressed. Here we would also like to have following discussions on the mentioned questions:\n\n1. **How is the IDC work [1] related to the present paper?**\n\n * Thanks for bringing a concurrent work to our attention. Indeed, the paper of IDC was submitted to arXiv after the submission deadline of NeurIPS 2022. 
Nevertheless, we would love to provide a discussion and comparison here and in the revision.\n\n * For the contribution on the parameterization part in IDC, the work [1] reveals that only storing down-sample version of synthetic images and conducting bilinear upsampling in downstream training would not hurt the performance much. Thus, given the same budget of storage, it can store $4\\times$ number of $2\\times$ down-sample synthetic images compared with the baseline.\n\n * According to the definition of our hallucinator-basis factorization, IDC can in fact be treated as a special case of HaBa, where the hallucinator is a parameter-free upsampling function and each basis has a smaller spatial size.\n\n * The main focuses for IDC and HaBa are however different. For IDC, the core is to reduce the spatial size for efficient parameterization. For HaBa of this paper, instead, we do not modify the spatial size of bases in the default setting for better explainablity and intuitive comparisons with the baselines.\n\n * Thus, IDC and HaBa are in fact two orthogonal techniques and they can readily join force to enhance the baseline performance. Here, we try using the technique of IDC and adopting $2\\times$ down-sample synthetic images on the baseline MTT, based on which we further consider adding our HaBa and involving 5 hallucinators. The results are as follows:\n\n | Avg. # of Param / Class | 2$\\times$32$\\times$32$\\times$3 | 11$\\times$32$\\times$32$\\times$3 | 51$\\times$32$\\times$32$\\times$3 |\n | :---------------------: | :----------------------------: | :-----------------------------: | :-----------------------------: |\n | Baseline | 49.89$\\pm$0.95 | 65.92$\\pm$0.62 | 70.73$\\pm$0.52 |\n | Baseline+IDC | 56.13$\\pm$0.38 | 70.85$\\pm$0.43 | 71.01$\\pm$0.41 |\n | Baseline+IDC+ours | 61.27$\\pm$0.34 | 72.14$\\pm$0.22 | 75.31$\\pm$0.27 |\n\n We can observe that with the efficient parameterization of IDC, the performance of baseline can be improved. With HaBa in this paper, the performance can be further improved a lot: 5.14%, 1.29%, and 4.30% in the three settings respectively, which demonstrates that IDC and HaBa work in different ways.\n\n2. **Applications in continual learning.**\n\n * We appreciate the reviewer for pointing out the setting of continual learning (CL), an application that can potentially demonstrate the advantage of the proposed solution further. \n\n * Following the setting of DM [2], we conduct experiments on the CL setting of CIFAR-100, with 20 random classes per stage. The average number of parameters per class is $20\\times32\\times32\\times3$. The synthetic datasets are trained using ConvNet. 
The accuracy results for our method and the DM baseline on the same ConvNet architecture are as follows:\n\n | # of Class | 20 | 40 | 60 | 80 | 100 |\n | :--------: | :--: | :--: | :--: | :--: | :--: |\n | Baseline | 57.3 | 50.8 | 46.5 | 42.3 | 38.8 |\n | Ours | 63.4 | 56.9 | 52.0 | 47.2 | 43.7 |\n | Gain | +6.1 | +6.1 | +5.5 | +4.9 | +4.9 |\n\n On ResNet-18 architecture, the results are as follows:\n\n | # of Class | 20 | 40 | 60 | 80 | 100 |\n | :--------: | :---: | :---: | :---: | :---: | :---: |\n | Baseline | 48.7 | 39.9 | 34.7 | 30.8 | 27.2 |\n | Ours | 59.6 | 52.6 | 47.0 | 42.4 | 38.6 |\n | Gain | +10.9 | +12.7 | +12.3 | +11.6 | +11.4 |\n\n Above results demonstrate that the proposed method can indeed increase the informativeness of synthetic datasets and thus produce significantly better performance, especially on the cross-architecture setting.\n\nWe have included these discussions and results in the revised supplement. Thanks again for the constructive suggestions. Sincerely hope that our response clarifies the reviewer's questions.\n\n***\n\n[1] Dataset Condensation via Efficient Synthetic-Data Parameterization, Kim et al., ICML 2022\n\n[2] Dataset Condensation with Distribution Matching, Zhao et al., arXiv, 2021", " Thanks for your clarification. I have another question regarding the scalability.\n\nI can see that many factors can affect the final performance, such as 1) width, depth, and the number of hallucinators, 2) size and the number of bases, 3) architecture of encoder and decoder, 4) the way to share a part of encoder or decoder. Besides, many trade-offs are going on among accuracy, speed, and the number of parameters. The authors have provided several ablation studies on each component independently, and all results agree with the expectation - \"the larger, the better, and a diminishing return is going on.\" But it is unclear what is the best joint configuration given a fixed storage budget. I want to gain some insights on how the best configuration to achieve the best accuracy varies as we increase the storage budget. In other words, what is the best way to scale the bases and hallucinators? What are the dominant factors when the storage budget is small (large)? And when does the transition phase happen? To simplify the problem, let us ignore the speed and only focus on the accuracy and the number of parameters. How do we choose the width, depth, and the number of encoders (decoders)? How do we choose the number and the dimension of the bases? (I separate the encoder and decoder as you show that it can achieve good performance even without encoders). I think this problem is very important for practitioners if they want to use this new parameterization, and I feel like the current 1 Conv-ReLu block may not be the optimal choice.", " Thanks for the response! I really appreciate the updates (esp. the credit assignment changes), answers and new results! It cleared up most of my confusions and I have raised my score accordingly. I have a few more questions:\n\n1. [1] can be also viewed as a DD compression work, where a distilled dataset is compressed to be parametrized with smaller parameter counts. ~Would the authors consider comparing with them?~\n\n\n EDIT: after realizing that code of [1] is only released a few days before NeurIPS deadline, I don't think that it was reasonable that I asked for a comparison. However, I am curious to hear the authors' thoughts on how works like [1] are related to the present paper.\n\n2. 
Reducing distilled dataset #parameters can be very important in continual learning applications. Prior works (e.g., [2,3]) show that storing distilled dataset for each task can help reduce catastrophic forgetting. Here the actual storage size is important and the proposed HaBa can potentially be very useful. Would the authors consider doing such experiments? I would imagine that HaBa can lead to important gains, which can better justify the focus on storage size (#parameter).\n\n[1] Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, Joonhyun Jeong, Jung-Woo Ha, and Hyun Oh Song. Dataset condensation via efficient synthetic-data parameterization. arXiv preprint arXiv:2205.14959, 2022.\n\n[2] Lee, Saehyung, Sanghyuk Chun, Sangwon Jung, Sangdoo Yun, and Sungroh Yoon. \"Dataset Condensation with Contrastive Signals.\" arXiv preprint arXiv:2202.02916 (2022).\n\n[3] Zhao, Bo, and Hakan Bilen. \"Dataset condensation with differentiable siamese augmentation.\" In International Conference on Machine Learning, pp. 12674-12685. PMLR, 2021.", " We sincerely thank the reviewer for the further question. The baseline is MTT. The difference between the two results is on the **channel-independent basis** or **channel-dependent basis** adopted. In this paper, the shape of a basis is exact the same as that of an image. In other words, the number of channels in a basis is 3. Here, for the BPC=1 setting (IPC=BPC+1=2 for baseline), we consider channel-independent bases, to send each channel of bases to hallucinators independently, which is equivalent to 1-channel bases. In Tab. 1 of the main paper, to compare with previous baselines at the same setting, however, we do not adopt such manner and use 3-channel bases. In both cases and the baseline, the numbers of total parameters are the same. The difference between channel-independent bases and channel-dependent ones is analyzed in Fig. 7 of the main paper. We have also updated the aforementioned experimental results to provide the results for both channel-independent and channel-dependent settings. We hope our response clarifies the reviewer's question.", " What is \"baseline\"? Is it MTT? Why does \"Ours\" achieve much better performance than the result in Table 1 for one image per class setting? What changes have been made?", " We appreciate the reviewer jG3P's efforts on the constructive feedback and are glad that the reviewer finds our work novel and the experimental results convincing. The questions are fully addressed as follows:\n\n1. **Can the compressed data be used for KD methods?**\n\n * Thank you for your question. The idea of dataset distillation is indeed inspired by the knowledge distillation. Nevertheless, they are orthogonal techniques which can be applied jointly: to compress both model and data. We conduct an experiment on CIFAR-10 dataset with 10 bases per class:\n\n | | w/o KD | w. KD Teacher on Real | w. KD Teacher on Synthetic | w. KD Both Teachers |\n | :------: | :----: | :-------------------: | :------------------------: | :-----------------: |\n | Ours | 69.48 | 69.70 | 69.82 | 70.03 |\n | Baseline | 63.90 | 64.43 | 64.57 | 64.95 |\n\n Here, \"Teacher on Real\" means using a network trained on the real dataset as the teacher, \"Teacher on Synthetic\" means using a network trained on the synthetic dataset as the teacher, and \"Both Teachers\" means including both networks trained on the real dataset and the synthetic dataset respectively as the teacher. 
The teacher and student networks have 4 and 3 convolution blocks respectively.\n\n2. **The effect of model size on the final performance.**\n\n * Thank you for the insightful and interesting question. We conduct an experiment on the CIFAR-10 dataset with 10 bases per class. The model used for dataset distillation is a standard convolutional neural network with 3 convolution blocks and we test the across-architecture performance on ResNet18, ResNet50, and ResNet101.\n\n | | ResNet18 | ResNet50 | ResNet101 |\n | :----------: | :------: | :------: | :-------: |\n | Ours | 57.97 | 31.29 | 25.66 |\n | Baseline | 45.05 | 22.19 | 16.74 |\n | Real Dataset | 91.74 | 90.74 | 90.01 |\n\n We find that the gap of performance with the compressed dataset and the real dataset is increasing with the growth of model size. We think that it would become tough for the synthetic dataset with such a high compression ratio to train up a huge model with significantly more parameters, which is a challenge for dataset distillation.", " We sincerely thank the reviewer fSqe for the pertinent feedback and are happy that the reviewer finds our method achieves better dataset distillation performance in terms of the same parameter count. There are two major concerns:\n\n1. Using parameter count or the number of final images as the evaluation metric;\n2. Referring to the task as Dataset Distillation (DD) [1] or Dataset Condensation (DC) [2].\n\nWe would like to address the 2nd concern firstly since it is not related to the technical part. We agree that the two names refer to the same task and the concept of the task is introduced by the DD paper initially. We apologize for the improper credit assignment, and we did not intend to do so but merely followed the naming convention of previous works. We have now revised the paper, and have changed our title to “Dataset Distillation via Factorziation”, to highlight the credit of DD. Moreover, we have revised various parts of the manuscript, including abstract, introduction, and related works as suggested, to clarify the relationship between DD and DC, and to acknowledge the pioneering contribution of DD. Please refer to the Rebuttal Revision. We sincerely appreciate the reviewer bringing this to our attention. We hope that in this version the relationship between DD and DC is clearly clarified. \n\nFor the 1st concern, our opinion is that **parameter count is a reasonable metric for the area of dataset distillation under the setting of comparison in this paper**. We have the following reasons:\n\n* The motivations of dataset distillation are two-fold: to **alleviate the burden on storage** [2] and **speed up the training of downstream models** [1]. Let us discuss them respectively:\n 1. The cost of storage can be reflected directly by the number of parameters. What users actually care about is how to use lowest storage to obtain the most information, instead of how many images the dataset has.\n 2. The proposed method does not increase the time cost of training downstream models. Actually, in all the comparisons, the dataset size is set as the number of bases while hallucinators are treated as a kind of parameterized data augmentation, which works online in downstream training just as general data augmentation techniques. In this sense, **the actual dataset size for one epoch does not increase**. In other words, we rigorously control the training **iterations** and **batch size**, used by our method and baseline methods the same. 
Moreover, since the hallucination networks used in this paper are lightweight, the time cost of online image composition is actually close to general data augmentation techniques (**140.11 v.s. 142.86 epochs per second** on CIFAR-10 with 10 bases / images per class). These facts indicate that our method can improve the performance significantly given less cost of storage and almost the same downstream training time. We have clarified this in Line 226 to 229 of the revised version.\n* Using the number of final images as the evaluation metric, unfortunately, indeed suffers from significant drawbacks:\n 1. If we consider data augmentation, one of the simplest techniques, horizontal flip, can make the number of images double. If other techniques like random rotation, random shift, and color jitter, the number of images becomes infinite. In this sense, the size of distilled datasets in almost all the previous methods [3,4,5,6] on dataset distillation are infinite, which becomes weird.\n 2. There may be many trivial solutions if only the number of images is used as the evaluation metric. For example, we can concatenate all the images in a dataset to a single one to form a huge image. In this case, we get a dataset whose size is only 1.\n\nThus, we believe that parameter count is truly reasonable to be adopted as the main comparison metric, which is also the common comparison scheme in recent works focusing on the parameterization of the distilled data [7,8].", " If we can reach an agreement on the metric, we think most concerns could be addressed:\n\n1. **Why should we care about lower parameter count? Comparison with standard image compression techniques?**\n\n * Actually, the main purpose of this paper is not to further decrease the parameter count, which is indeed already very low as discussed. Instead, we would like to increase the average information contained by each parameter. Thus, in all the experiments, the parameter counts of our method and baseline methods are held the same. The results indeed reflect some advantages of our method.\n\n * Thanks for pointing out the image compression, a closely related research area, to us. In fact, dataset distillation and image compression are orthogonal, which means they can be adopted together to decrease the cost of storage, for both our method and baselines. As bases in this paper have the same shape / format with original images, image compression techniques are also applicable to our method. If we take the jpeg image compression technique into consideration, we have the following results on CIFAR-10 dataset with 10 bases per class for our method and 11 images per class for the baseline:\n\n | | Accuracy (%) | # of Parameters | # of KB after Compression |\n | :--------------: | :------------: | :-------------: | :-----------------------: |\n | Baseline | 65.92$\\pm$0.62 | 337,920 | 112.2 |\n | Ours | 70.27$\\pm$0.63 | 335,040 | 120.1 |\n | Ours - 1 channel | 68.98$\\pm$0.44 | 129,970 | 96.3 |\n\n We observe that image compression technique can have a comparable contribution to the cost of storage for both our method and the baseline, which means it is indeed orthogonal with dataset distillation in this paper. Moreover, we also find that we can even use 1-channel bases to reduce the cost of storage further without hurting much performance.\n\n2. **Comparison at the same dataset size.**\n\n * We agree with the reviewer that comparing at the same number of images is also meaningful in dataset distillation to reflect some properties. 
In this paper, if our method and the baseline use the exact same objective function for dataset distillation, the upper bound of information carried by our parameterization, which contains much less parameters, cannot exceed that by the original one, since the baseline method directly conducts optimization over the resulting images. As such, performance of the baseline, in reality, imposes **an upper bound** of that of our method when the same objective function is adopted, *i.e.*, w/o $\\mathcal{L}_{cos.}$. We show the comparison on CIFAR-10 dataset as follows:\n\n | \\# of Images | 10 | 20 | 30 | 40 | 50 |\n | :---------------------------: | :----------: | :----------: | :----------: | :----------: | :----------: |\n | Baseline | 65.3$\\pm$0.7 | 68.6$\\pm$0.5 | 70.4$\\pm$0.5 | 71.2$\\pm$0.3 | 71.6$\\pm$0.2 |\n | Ours | 65.3$\\pm$0.3 | 68.9$\\pm$0.3 | 70.8$\\pm$0.4 | 71.8$\\pm$0.3 | 72.2$\\pm$0.3 |\n | Ours w/o $\\mathcal{L}_{cos.}$ | 64.4$\\pm$0.4 | 67.7$\\pm$0.4 | 69.4$\\pm$0.6 | 70.8$\\pm$0.3 | 71.6$\\pm$0.2 |\n\n * Here for our method, we use 2 hallucinators and the number of bases is the number of images / 2. We can observe that under this setting, our performance can indeed approximate the theoretical upper bound of the baseline, with much less cost of storage and downstream training time.\n\n * With the proposed adversary contrastive constraint, however, our method can even outperform the baseline consistently, which further demonstrates the effectiveness of the proposed solution.\n\n * We have added the discussions and experiments on this aspect in Fig. 6 and the corresponding texts of the revised version.", " 3. **The stated motivation is not justified or verified.**\n\n * The motivation of our method is to store some common knowledge shared by samples in a dataset in hallucination networks. In this sense, the networks should be capable of extracting some sample-wise relations. For example, when different bases are sent to the same hallucinator, the images decoded by this hallucinator should demonstrate some shared property, which can be viewed as relations. By contrast, in the traditional parameterization, each image has to store a copy of such common knowledge. Thus, it does not learn relations among samples.\n * We agree with the reviewer that our hallucinators can capture better inductive bias. With such common inductive bias / knowledge encoded, our method provides the bases with more freedom to learn other more useful and sample-specific knowledge. In this way, it is capable of learning information of $|\\mathcal{H}|\\times|\\mathcal{B}|$ images with $|\\mathcal{H}|$ hallucinators and $|\\mathcal{B}|$ bases.\n * In other words, the proposed parameterization enables the increased number of images with comparable or even less storage and downstream training burden, which is a free lunch to incorporate more images.\n\n4. **Motivating example in Section 1 is misleading.**\n\n * Here, this is a motivating example to demonstrate that the original parameterization can even be improved by a simple trick, *i.e.*, combining images from different checkpoints. The impact of this behavior is introducing some closely related but different samples, which increases the diversity. However, such a naive way introduces additional cost of storage. 
Thus, we consider encoded the shared relationship with shared hallucinator networks, to achieve similar effect without increasing the cost of storage.\n * The main purpose of this example is not to compare with the baseline method under the same number of images. Thus, we do not choose to use the baseline method to learn more images directly. This evaluation can be found above and has been added to the revised version.", " 5. **Why use image-shaped \"basis\" rather than generic vectors?**\n\n * Thanks for the pertinent comment. We adopt this setting mainly for better explainablity.\n\n * In our conception, the bases are expected to capture basic information of images such as semantics and contours of contents, while hallucinators are expected to render the appearances such as colors and styles, which enjoys better explainablity as those shown in Fig. 4 and supplementary materials. Under this design, we consider the widely used encoder-decoder framework for pixel-wise image translation. Besides, since our bases share the same shape, or parameterization, with the raw images, it is more convenient and remarkable for this setting to reflect the advantage of our framework over the baselines.\n\n * The reviewer’s proposal to use generic vectors is definitely feasible and promising. In this setting, the encoder of the hallucinators can be removed. We provide the following experimental accuracy (%) on the CIFAR-10 dataset for different ways of parameterization. All the comparisons are conducted with the number of total parameters held the same. \n | BPC | 1 | 10 | 50 |\n | :--------------: | :------------: | :------------: | :------------: |\n | Ours | 55.66$\\pm$0.29 | 70.27$\\pm$0.63 | 74.04$\\pm$0.16 |\n | Ours w/o Encoder | 53.89$\\pm$0.20 | 69.66$\\pm$0.54 | 73.91$\\pm$0.26 |\n | Baseline | 49.89$\\pm$0.95 | 65.92$\\pm$0.62 | 70.73$\\pm$0.52 |\n\n We can find that the experimental results for the two settings are similar.\n\n6. **Please emphasize somewhere early in the paper that the increased dataset size is needed to achieve good performance.**\n\n * Sorry for the inconvenience. As explained above, we do not increase the dataset size. It is true that the number of final images increases without introducing additional cost of storage, which has been indicated in Line 56 to 61 of the revised version.\n\n7. **Please reconsider the choice of using DC to refer to the task and writing the paper as if DC introduced the task.**\n\n * Thanks for the comment. We have submitted the Rebuttal Revision which addresses this issue.\n\n8. **Some typos.**\n\n * Thanks for these detailed inspections. $L_{cts.}$ should be $L_{con.}$ and \"Constrain\" should be \"Constraint\". We have fixed them in the Rebuttal Revision.\n\n9. **Limitations.**\n\n * Thanks for pointing out some potential limitations. We admit that the added generator components makes inferior training efficiency of dataset distillation compared with baselines, including time and GPU memory cost, as indicated in Line 223 to 224. That is why we hope to enhance the training efficiency of the proposed method in future works, as indicated in Line 343. 
We have made a revision to add these potential limitations in the supplementary material of the revised version.\n\n***\n\n[1] Dataset Distillation, Wang et al., arXiv, 2018\n\n[2] Dataset Condensation with Gradient Matching, Zhao et al., ICLR, 2021\n\n[3] Dataset Condensation with Differentiable Siamese Augmentation, Zhao et al., ICML, 2021\n\n[4] Dataset Condensation with Distribution Matching, Zhao et al., arXiv, 2021\n\n[5] Cafe: Learning to Condense Dataset by Aligning Features, Wang et al., CVPR, 2022\n\n[6] Dataset Distillation by Matching Training Trajectories, Cazenavette et al., CVPR, 2022\n\n[7] Dataset Condensation via Efficient Synthetic-Data Parameterization, Kim et al., ICML 2022\n\n[8] Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks, Deng et al., arXiv, 2022\n", " We sincerely thank the reviewer for the insightful and constructive comments. We are glad that the reviewer finds our work novel and interesting. The concerns are fully addressed as follows.\n\n### About 3.1 - Basis and Hallucinator\n\n1. **The functionality of the encoder of the hallucinators.**\n\n * Thanks for the question. Since our bases have the same shape / format with final images, an encoder is required for feature extraction.\n\n * In our conception, the bases are expected to capture basic information of images such as semantics and contours of contents, while hallucinators are expected to render the appearances such as colors and styles, which enjoys **better explainablity** as those shown in Fig. 4 and supplementary materials. Under this setting, we consider the widely used encoder-decoder framework for pixel-wise image translation. Since our bases share the same shape, or parameterization, with the raw images, it is more convenient and remarkable for this design to reflect what is the input of hallucination networks and better to understand the advantage of our framework over the baselines.\n\n * The reviewer’s proposal to remove the encoder is definitely feasible and promising. It can be viewed as a variant of our formulation. In this setting, the bases become one type of latent code as those in typical generative models. We provide the following experimental accuracy (%) on the CIFAR-10 dataset for different ways of parameterization. All the comparisons are conducted with the number of total parameters held the same, which is also the setting for all following experiments.\n\n | BPC | 1 | 1* | 10 | 10* | 50 | 50* |\n | :--------------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |\n | Ours | 48.26$\\pm$0.84 | 55.66$\\pm$0.29 | 70.27$\\pm$0.63 | 70.55$\\pm$0.38 | 74.04$\\pm$0.16 | 73.26$\\pm$0.37 |\n | Ours w/o Encoder | 47.71$\\pm$0.77 | 53.89$\\pm$0.20 | 69.66$\\pm$0.54 | 68.98$\\pm$0.44 | 73.91$\\pm$0.26 | 71.52$\\pm$0.23 |\n | Baseline | 49.89$\\pm$0.95 | 49.89$\\pm$0.95 | 65.92$\\pm$0.62 | 65.92$\\pm$0.62 | 70.73$\\pm$0.52 | 70.73$\\pm$0.52 |\n\n **Here, * denotes that we consider channel-independent basis, to send each channel of basis to hallucinators independently, which is equivalent to 1-channel basis. We can find that the experimental results of with encoder and w/o encoder are similar for 3-channel basis. But the performance drops a lot (greater than 1% accuracy) for 1-channel basis. We conjecture that 1-channel basis would more rely on encoder for sufficient feature extraction.**\n\n2. **Using shared encoder and decoder across all the hallucinators.**\n\n * Thanks for the advice. 
It can indeed be more parameter-efficient to learn multiple style vector pairs $(\\sigma,\\mu)$ for a particular encoder and decoder pair. It also can be viewed as a variant of our formulation. To reflect the impact of such design, we conduct an experiment which tends to use shared encoder and decoder across all the hallucinators:\n\n | BPC | 1 | 1* | 10 | 10* | 50 | 50* |\n | :------------------------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |\n | Ours | 48.26$\\pm$0.84 | 55.66$\\pm$0.29 | 70.27$\\pm$0.63 | 70.55$\\pm$0.38 | 74.04$\\pm$0.16 | 73.26$\\pm$0.37 |\n | Ours w. Shared Enc. & Dec. | 46.47$\\pm$0.19 | 55.14$\\pm$0.44 | 69.47$\\pm$0.09 | 70.22$\\pm$0.37 | 72.69$\\pm$0.39 | 72.99$\\pm$0.04 |\n | Baseline | 49.89$\\pm$0.95 | 49.89$\\pm$0.95 | 65.92$\\pm$0.62 | 65.92$\\pm$0.62 | 70.73$\\pm$0.52 | 70.73$\\pm$0.52 |\n\n We can find that using a shared encoder and decoder may lead to a little bit inferior performance. We conjecture that different convolution encoders and decoders may contribute to the diversity of the extracted patterns, which increases the representation ability of the hallucinator set. Moreover, since we only use 1 convolution block for encoders and decoders, the number of parameters for encoder and decoder in a hallucinator (168) is not so significant compared with that of a basis (3072).\n\n * We have added discussions on this aspect in Tab. 5 and the corresponding texts of the supplementary material.", " 3. **Why do we only use 1 Conv-ReLu block for hallucinators?**\n\n * Thanks for the question. We use light-weight hallucinators with only 1 non-linear block, based on the trade-off between performance and data / training efficiency.\n\n * Considering more hidden layers or expanding the width of hallucinators may potentially help learn more information, which can be validated by the following experiment on CIFAR-10 with 1 BPC:\n \n | \\# of Non-Linear Block | 0 | 1 | 2 | 3 |\n | :------------------------------: | :------------: | :------------: | :------------: | :------------: |\n | Accuracy (%) | 68.43$\\pm$0.37 | 70.27$\\pm$0.63 | 71.17$\\pm$0.29 | 71.55$\\pm$0.27 |\n | Downstream Speed (epochs / sec.) | 144.54 | 140.11 | 125.04 | 115.62 |\n | \\# of Parameters | 6,144 | 6,312 | 10,963 | 16,131 |\n \n | Width | 3 | 8 | 16 |\n | :------------------------------: | :------------: | :------------: | :------------: |\n | Accuracy (%) | 70.27$\\pm$0.63 | 70.47$\\pm$0.37 | 71.28$\\pm$0.35 |\n | Downstream Speed (epochs / sec.) | 140.11 | 138.48 | 135.12 |\n | \\# of Parameters | 6,312 | 16,827 | 33,651 |\n \n The results show that increasing the depth from 1 to 2 makes the speed decrease by 10.8% but only has 0.9% performance gain. Increasing the width from 3 to 8 makes the number of parameters increased by 167% but only produces 0.2% performance gain.\n \n * Although increasing depth or width of hallucinators can somehow increase the performance, we do not want to sacrifice too much on the cost of downstream training time and storage, both of which are important factors in dataset distillation. That is why we only consider light-weight hallucinators with 1 Conv-ReLu block, which yields the best trade-off between accuracy and efficiency.\n \n * We have added discussions on this aspect in Tab. 3, 4 and the corresponding texts of the supplementary material.", " ### About 3.2 - Adversarial Contrastive Constraint\n\n1. 
**Why do we use a contrastive objective for the feature extractor and a cosine similarity for the hallucinator?**\n\n * Thanks for the insightful question. We empirically find that such configuration yields best results in practice. But it is also fine to consider other choices.\n\n * The objective for the feature extractor is to minimize the divergence between two samples while one objective for our dataset factorization is to maximize such divergence. Intuitively, any meaningful metric, such as cosine similarity and mutual information, to reflect such divergence would be fine.\n\n * In our early exploration, we indeed considered using the cosine similarity objective for both the feature extractor and the hallucinators. Later, we found that replacing it with mutual information for the feature extractor can produce slightly better performance, as shown in the results below. Since the feature extractor wants to minimize the divergence, or maximize the mutual information, we adopt the lower bound of the mutual information, which has the contrastive form.\n \n | BPC | 1 | 10 | 50 |\n | :---------: | :------------: | :------------: | :------------: |\n | Ours | 55.66$\\pm$0.29 | 70.27$\\pm$0.63 | 74.04$\\pm$0.16 |\n | Ours (Cos.) | 55.61$\\pm$0.94 | 70.09$\\pm$0.39 | 73.34$\\pm$0.36 |\n | Baseline | 49.89$\\pm$0.95 | 65.92$\\pm$0.62 | 70.73$\\pm$0.52 |\n \n * If we analyze the form of the lower bound of the mutual information and the cosine similarity in detail, we can find that the difference is only on whether we should take negative samples into consideration, which can be viewed as a kind of normalization. Since directly enforcing a high similarity is probably harmful for the feature extractor to learn a discriminative representation, considering negative samples and regulating the relative similarity can perform better in this sense.\n\n### About 4.1 - Datasets and Implementing Details\n\n1. **How long does the training take with the new parameterization compared to baselines?**\n * Thanks for the question. In the experiments, we use twice the number of iterations of that adopted in the official baseline setting. We have to admit that the convergence indeed becomes slightly slower compared with that of the baseline method, which is an unfortunate fact given the nature of our task configuration. In practice, however, this training efficiency is of minor concern for the dataset condesntaiton task, since the training take places for only once. As discussed in the conclusion and limitation, in our future work, we will explore improving the training efficiency of HaBa.\n\n2. **Some combination of hallucinator and basis may not be appropriate.**\n * Thanks for this pertinent point. Actually, in our exploration, we also experiment using an independent set of hallucinators for each class. As shown in Tab. 4 of the main paper, the performance gain is actually very limited, and it can increase additional cost on storage. That is why we adopt shared hallucinators across all the classes by default.\n * Analyzed through the qualitative results in Fig. 4, we find it is not the case that one hallucinator can only generate one specific style. Instead, it can be somehow adaptive to the input bases. For example, if we input the bases of dogs to $H_1$, the generated images are equipped with reasonable styles of dog. 
And if we input the bases of ships to $H_1$, the styles are also reasonable to ships (the background is sea-like blue).\n * We agree with the reviewer that it is promising to take class-wise relationships into consideration in future works. It is probably the optimal solution that one class shares hallucinators with some specific classes but does not share with others, which is more flexible. We have added it as a future direction in the revised supplementary material.\n3. **About hyperparameters.**\n * Thanks for the suggestion. Actually, most of the hyperparameters in HaBa come from the baseline method. As for the dataset factorization framework itself, the hyperparameters are only weights of those loss terms.\n * We have added a summary of all the hyperparameters in Tab. 7 of the revised version as the reviewer suggested to provide a more convenient view.", " ### About 4.3 - Ablation Studies\n\n1. **Why does the proposed method improve the cross-architecture performance?**\n * Thanks for the question. Actually, the proposed method brings a more informative parameterization of condensed datasets. With the scheme of factorization, information of $|\\mathcal{H}|\\times|\\mathcal{B}|$ images can be included with $|\\mathcal{H}|$ hallucinators and $|\\mathcal{B}|$ bases. More useful amount of data would be beneficial for training downstream models in machine learning. Please refer to the TSNE visualization in the supplementary material.\n2. **Why can diversifying training data increase the robustness to corruption?**\n * Thanks for the question. In the theory of learning from different domains [1], the test error in the target domain is dominated by 1) the error in the source domain and 2) the distance between source and target domains. We would like to contribute the robustness to corruption to the former. As validated in the previous experiments, our method can achieve lower error on the original CIFAR-10 dataset. Thus, we may also expect better performance on the corresponding corrupted data compared with baseline methods.\n * We have added more discussion in Fig. 5 and the corresponding texts of the supplementary material.\n3. **Ablation studies on the depth and width of hallucinators.**\n * Thanks for the suggestion. We have included these studies in the revised version. Please refer to **About 3.1 - Basis and Hallucinator - 3** for details of the experimental results.\n\n### Misc\n\n* Thanks for these detailed inspections and terribly sorry for the unclarity. We have fixed all the issues in the revised version. Please refer to the Rebuttal Revision for details.\n\n### Limitations\n\n1. **About hyperparameters.**\n\n * Thanks for the comment. Please refer to **About 4.1 - Datasets and Implementing Details - 3**.\n\n2. **How does the proposed solution work for other modalities?**\n\n * Thanks for pointing this out. In fact, the proposed dataset factorization can also work for other modalities. The principle is that bases can have the same parameterization with the raw data and hallucinators can be the translation model in the corresponding modalities.\n\n * Here, we adopt the framework of IDC [7] as the baseline and conduct an experiment on Mini Speech Commands [8], a dataset for speech recognition, following their original setting. The results are as follows:\n\n | BPC | 10 | 20 | Full Dataset |\n | :------: | :--: | :--: | :----------: |\n | Ours | 74.5 | 84.3 | 93.4 |\n | Baseline | 73.3 | 83.0 | 93.4 |\n\n * We have added these results in the revised supplementary material.\n\n3. 
**Some optimization issues when scaling the method to many bases and hallucinators.**\n\n * We agree with the reviewer that when the number of bases and hallucinators are large, there would be some optimization issues. As shown in Fig. 6 (right) of the main paper, further increasing the number of hallucinators does not result in substantial performance gain.\n * Actually, such limitation is largely inherited from the adopted baseline methods. For example, when the number of images is large (greater than 50), the performance gain is also somewhat limited for MTT [6].\n\nWe have added these potential limitations in the revised version.\n\n***\n\n[1] A Theory of Learning from Different Domains, Ben-David et al., Machine Learning, 2010\n\n[2] Dataset Condensation with Distribution Matching, Zhao et al., arXiv, 2021\n\n[3] Cafe: Learning to Condense Dataset by Aligning Features, Wang et al., CVPR, 2022\n\n[4] Herding Dynamical Weights to Learn, Welling et al., Machine Learning, 2009\n\n[5] Active Learning for Convolutional Neural Networks: A Core-Set Approach, Sener et al., ICLR, 2018\n\n[6] Dataset Distillation by Matching Training Trajectories, Cazenavette et al., CVPR, 2022\n\n[7] Dataset Condensation via Efficient Synthetic-Data Parameterization, Kim et al., ICML 2022\n\n[8] A Dataset for Limited-Vocabulary Speech Recognition. Warden et al., arXiv 2018", " This paper proposes a new parameterization (HaBa) of image data for dataset condensation. The proposed method decomposes a dataset into two components: data hallucination networks and bases, and considers the relationship between different condensed data points. Experiments show that HaBa achieves a better compression rate and better cross-architecture generalization performance. This paper is well-motivated and focuses on an essential problem (the parameterization of the condensed data) for current dataset condensation methods. The proposed method is novel, and the results are encouraging.\n\n- Originality: Decomposing the condensed data to bases and hallucinators is novel and interesting. Related work focusing on the parameterization of the condensed data: https://arxiv.org/abs/2205.14959, https://arxiv.org/abs/2206.02916.\n- Quality: Most arguments are well-supported, and the experiments demonstrate the effectiveness of the proposed method. Some additional ablation studies and experiments are needed to make the paper more convincing.\n- Clarity: This paper is well-written and easy to follow, though the position of some figures and tables can disrupt the reading experience. Some training details can be added to the appendix.\n- Significance: The experiments demonstrate the effectiveness of the proposed method and can inspire some future work on the parameterization of the condensed data. 3.1 Basis and Hallucinator:\n- I am curious about the functionality of the encoder of the Hallucinators. Is it necessary? What will happen if you remove the encoder part and only train the decoder and corresponding style vector?\n- I wonder if it would be more parameter-efficient to learn multiple style vector pairs ( $\\sigma$ , $\\mu$ ) for a particular encoder and decoder pair.\n- I guess the weights can learn more information than the fixed basis. Why do you only use 1 Conv-ReLu block for Hallucinators? What will happen if you consider more layers in the encoder and decoder structure? 
Why not consider increasing the capacity of the hallucinator by expanding its width?\n\n3.2 Adversarial Contrastive Constrain\n- Why do you want a contrastive objective for the feature extractor and a cosine similarity for the hallucinator? Could you have just one objective (e.g., cosine objective) for both the feature extractor and the hallucinator (the feature extractor maximizes it, and the hallucinator minimizes it)?\n\n4.1 Datasets and Implementing Details\n- How long does the training take with the new parameterization compared to baselines? Is the optimization easier or harder with the new parameterization?\n- I can imagine that it is possible that some combination of hallucinator and basis may not be appropriate and introduce bias into the downstream classifier. For example, the dog's style may not be suitable for a ship. Suppose you train the downstream classifier on a ship instance with dog style. Would the classifier become confused at test time? What is your opinion on that?\n- Because the proposed methods have so many hyperparameters, I suggest adding a table summary of the hyperparameters used for each experiment in the appendix.\n\n4.3 Ablation Studies\n- Cross-Architecture Performance: Lack of explanation. Why does it improve the cross-architecture performance? What is the insight behind it? What design choice gives this nice property?\n- Cross-Domain Performance: Maybe change the title to \"robustness to corruption\"? I am unclear why \"diverse training data\" can give this nice property.\n- I would like to see some ablation studies on what will happen if we increase the hallucinator capacity (increase depth or width).\n\nMisc\n- Line 63: \"Such an adversarial scheme, in turn, enforces the hallucinators to produce diversified images and recover original data distribution thoroughly.\" The argument that \"recover original data distribution\" appears many times in the paper. I don't think dataset condensation methods aim to recover the data distribution. They are mining for informative examples instead. How come the condensed data recovers the original data distribution? Visually, the condensed data looks very different from the real data. What is your opinion on this? I would be more comfortable with an argument like \"capture the discriminative feature better.\"\n- Line 89: \"supervision signals from downstream tasks, e.g., cross-entropy loss for classification, are not rigorous enough for synthetic samples.\" What do you mean by \"rigorous enough\"? What do you consider a rigorous supervision signal?\n- Line 211: Typo. 10, 100, and 200 -> 10, 100, and 100\n- Figure 4 and Table 1 are far away from their text, which disrupts the reading flow.\n- Table 4 is confusing since it includes two different comparisons. Splitting into two tables or moving one to the appendix seems better.\n\nI would be happy to increase my score if my questions can be addressed appropriately. The authors may want to discuss the limitations of their methods. Here are some of my incomplete thoughts.\n- There are so many hyperparameters in the proposed method that may impede its practical usage.\n- The parameterization only works for image data but may want to consider other modalities.\n- When scaling the method to many bases and hallucinators, there may be some optimization issues.", " The paper investigates different ways to parametrize the synthetic learned/distilled dataset in the Dataset Distillation [1]. 
In particular, they propose to reparametrize the dataset as $\\{G_j(z_i)\\}_{ij}$, where $z_i$'s are \"basis\" (i.e., latents in usual generative modelling terminology), and $G_i$'s are the \"hallucinators\" (i.e., generators). (I took liberty to use variable $z$ rather than the $\\hat{x}$ in paper, for better clarity). \n\nThis reparametrization can be essentially applied with any Dataset Distillation method. In particular, the authors show that combining it with MTT (the SOTA Datasett Distillation work), can learn better distilled datasets **with same total number of parameters but with a much larger dataset size**. In fact, the performance gain mostly come from the increased dataset size (see below).\n\n\n[1] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. Dataset distillation. arXiv preprint arXiv:1811.10959, 2018. I will first discuss what I believe are two important issues of the current paper, and then provide a list of strengths and weaknesses.\n\n### **Parameter count and dataset size**\n\nIt is important to note that\n+ The method essentially is a **compression** of the distilled dataset in terms of #parameters, but **not in** dataset size, the usual quantity of interest in Dataset Distillation. The authors unfortunately do not make this clear in most cases when claiming improvements, which in reality comes at a cost of larger distilled dataset size.\n+ It is unclear why #parameters should be a meaningful metric for Dataset Distillation. The paper does not provide arguments for it, or compare with any image compression methods.\n+ When comparing with same **distilled dataset size**, the proposed approach often obtains slightly inferior performance (than the original pixel parameterization). This runs directly in contrary of the authors' claim that optimizing pixels directly is difficult for learning relations between samples (lines 29-33,43-44,121-124,329-331).\n\n### **Improper credit assignment in writing**\n\nI believe that it is greatly damages the integrity and openness of the academic community to \n+ Refer to this task as Dataset Condensation (DC) [2] rather than Dataset Distillation (DD) [1] \n+ Write as if the DC paper [2] introduces this task (entire paper, esp. introduction and abstract; DD is only mentioned once in related work and as a row in results table), \n\nwhen\n+ Both DD [1] and DC [2] talk about the same task, with DD paper predating DC paper for **2 years**, DC paper citing DD paper and acknowledging that they investigate the same task,\n+ Neither DD paper nor DC paper explicitly gives a name to the task, but calls their proposed method DD or DC, and seems to refer to the task as DD or DC. \n\nIt is **extremely misleading** to people not familiar with this area, and essentially assigns credit in an obviously wrong way. I sincerely hope the authors did not do this purposefully. Admittedly, some other papers do the same thing, but it is no excuse to keep doing the harm. I strongly urge the authors to revise the paper in this aspect.\n\n### **List of Strengths and Weaknesses**\n\n**Strengths:**\n+ The authors show that, by reparametrization with deep generators in Dataset Distillation training, it is possible to further compress a distilled dataset into fewer parameters, without losing much performance. \n+ Therefore, with the same parameter count, the parametrization achieves better Dataset Distillation performance (at the cost of larger distilled datasets). 
If parameter count turns out a useful metric in future, this can potentially have better use cases, after validating its benefits against other image compression techniques.\n\n**Weaknesses:**\n+ ~The stated motivation for proposed reparametrization is that (original) pixel parametrization don't learn relations. This claim is not immediately clear why it is true, and is not verified. In fact, results in paper show that parametrization does not help when the dataset size is kept the same.~\n+ ~The motivating example in Section 1 is misleading.~\n+ The paper mentions reduced storage cost, but it is unclear why it should matter when the distilled dataset often only contains tens or hundreds of images.\n+ ~Writing is somewhat misleading in that whenever performance improvement is mentioned, the increased dataset size is almost never mentioned.~\n+ ~No comparison with standard image compression techniques.~\n+ Missing comparison with other DD parametrization work.\n\nPlease see my questions and suggestions below.\n\n[1] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. Dataset distillation. arXiv preprint arXiv:1811.10959, 2018.\n\n[2] Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. Dataset condensation with gradient matching. arXiv preprint arXiv:2006.05929, 2020.\n\n---- \nThe author's comment addressed most of my concerns (striked through text above). I have adjusted my score accordingly. 1. **Why should we care about lower parameter count? Comparison with standard image compression techniques?**\n\n Dataset Distillation (DD)'s scientific inquiry and practical applications both rely on the smaller distilled dataset size. Are there any applications that strongly relies on parameter counts? Indeed, smaller storage cost is always better, but I'm not sure if it matters that much when the distilled dataset often just contain tens or hundreds of images. Suppose that we care, then the authors should compare with regular compression techniques, including lossless and lossy ones, image-specific and generic ones.\n\n As an example, a simple lossless GIF compression of [a publicly available image from the distilled dataset of CIFAR-10 trained using MTT](https://georgecazenavette.github.io/mtt-distillation/images/cifar10_zca_10/airplane_4.png) is just 1923 bytes (with meta info!), versus the raw 3072 bytes. It should be easy to compress much further, and essentially get much smaller parameter count.\n\n2. **Given the importance of distilled dataset size in Dataset Distillation research, the paper should also compare methods at the same dataset size.**\n\n E.g., the proposed method at **BPC(bases-per-class)=10** has **IPC(images-per-class)=50**, but is only compared with other methods run at **IPC=11**. In fact, when fixing the dataset size, the proposed method is often slightly inferior (e.g., BPC=10 vs IPC=50). \n\n The current comparison is done at the same parameter count, which is only meaningful if the emphasis on parameter count cannot be sufficiently justified (1st question). In either case, same dataset size (IPC) comparison should be done.\n\n3. **The stated motivation is not justified or verified.** \n\n The motivation for the proposed bias-hallucinator parameterization is that the original pixel parametrization don't learn relations among samples. (lines 29-33)\n\n 1. What exactly is \"relations\" referring to here? \n\n 2. The pixel parametrization is the most flexible and can represent any images. 
Admittedly, different parametrizations lead to different inductive biases / implicit regularizations. Is it correct to understand the claim as a hypothesis that the bias-hallucinator parameterization has better inductive bias to learn better distilled datasets due to sharing the same hallucinator across biases? \n\n If so, the hypothesis seems false from the experiment results (see 2nd question above). In other words, the stated performance gain mostly come from the increased distilled dataset size, rather than from the proposed parametrization.\n\n3. **Motivating example in Section 1 is misleading.**\n\n It is a comparison among **(1)** proposed parameterization **(2)** original pixel parametrization w/ same #parameters and thus much less \\#images **(3)** same as (2), but combining images from different checkpoints to form a larger distilled dataset.\n\n (3) is a really bizarre and misleading choice because the original parametrization is perfectly capable of learning a larger distilled dataset, and (3) is not really reducing any \\#parameter or storage cost either. In fact, the later experiment results show that simply using original parametrization to distill a larger set have similar performance as (1). This makes the choice of (3) quite suspicious.\n\n3. **This is essentially generative modelling with a discrete set of latents. Why use image-shaped \"basis\" rather than generic vectors?**\n\n It is not really wrong, but a somewhat weird choice given the common practice of using generic latent vectors. An explanation would be better to have.\n\n4. **Please emphasize somewhere early in the paper that the increased dataset size is needed to achieve good performance.**\n\n A reader should not have to reach experiment section and to do the arithmetic to realize that the proposed technique comes at a cost.\n\n5. **Please reconsider the choice of using DC to refer to the task and writing the paper as if DC introduced the task (see first section).**\n\n6. The paper have some quite confusing places. Here are a few:\n\n\n 1. Fig. 3 mentions a $\\mathcal{L}_{cts.}$ that is not mentioned anywhere else.\n\n 3. \" Constrain\" => \" Constraint\" in multiple places. I do not think that the limitation is sufficiently discussed. As I emphasized above, the authors should make it clear that the claimed improvements come with the cost of much larger distilled dataset size, the one metric that Dataset Distillation research cares about the most. Furthermore, the added generator components should increase both training time and the training GPU memory cost. The training time effect is not really discussed other than a sentence in the checklist, which is not part of the paper.", " This paper proposes a novel algorithm for dataset condensation and introduces a dataset factorization approach - HABA which frame the factorization as a hallucinator-basis problem. This paper also introduces a pair of adversarial contrastive constrains to increase the diversity of generated images and inject more discriminant information into the factorization. ### Strengths\n1. This paper is well written and easy to follow, and the presentation is clear.\n\n2. The overall idea is novel and the experimental results are convincing.\n\n### Weaknesses\nI have to be honest that I'm not an expert in this area, but I have a few questions about this paper.\n\n1. 
Since the author has mentioned that the early dataset compression methods are inspired by the knowledge distillation, I'm wondering if these compressed data can be used for KD methods.\n\n2. Although this paper has performed the experiments across different architectures, however, I'm wondering about the effect of the model size on the final performance. (i.e. the performance gap between the compressed data with the whole data on ResNet18 will be greater or smaller than ResNet-101 ?) See Weaknesses. adequately addressed" ]
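For reference, a minimal sketch of the basis-hallucinator factorization discussed in the first review of this record (module widths, depths, and the usage line are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class FactorizedDistilledSet(nn.Module):
    """Distilled dataset parameterized as {G_j(z_i)}: shared hallucinators G_j
    applied to learnable image-shaped bases z_i (sizes here are illustrative)."""

    def __init__(self, num_bases=10, num_hallucinators=5, channels=3, size=32):
        super().__init__()
        # Image-shaped bases, as the review notes (not generic latent vectors).
        self.bases = nn.Parameter(torch.randn(num_bases, channels, size, size))
        self.hallucinators = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, channels, 3, padding=1),
            )
            for _ in range(num_hallucinators)
        )

    def forward(self):
        # Every hallucinator decodes every basis: num_bases * num_hallucinators images,
        # i.e. a larger synthetic set from fewer stored parameters.
        images = [g(self.bases) for g in self.hallucinators]
        return torch.cat(images, dim=0)


# Usage: the synthetic set below is what a distillation objective (e.g. trajectory
# matching) would optimize end-to-end through both bases and hallucinators.
synthetic = FactorizedDistilledSet()()
print(synthetic.shape)  # torch.Size([50, 3, 32, 32])
```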
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 2 ]
[ "W8qR1qGYvTc", "OmEFD2NrCWD", "OmEFD2NrCWD", "z0HfmKk-4RU", "N6z1PnHylF", "HGs-UzXzXJE", "2iU8l6tmr1g", "X4Cr4MqhiL", "lbhXrLFICIH", "FJiyiR5tvuk", "quiqq_LX7V1", "WRkI5WaxMWF", "WRkI5WaxMWF", "WRkI5WaxMWF", "WRkI5WaxMWF", "Bj8AZEglkxx", "Bj8AZEglkxx", "Bj8AZEglkxx", "Bj8AZEglkxx", "nips_2022_luGXvawYWJ", "nips_2022_luGXvawYWJ", "nips_2022_luGXvawYWJ" ]
nips_2022_q6bZruC3dWJ
Teach Less, Learn More: On the Undistillable Classes in Knowledge Distillation
Knowledge distillation (KD) can effectively compress neural networks by training a smaller network (student) to simulate the behavior of a larger one (teacher). A counter-intuitive observation is that a more expansive teacher does not make a better student, but the reasons for this phenomenon remain unclear. In this paper, we demonstrate that this is directly attributed to the presence of \textit{undistillable classes}: when trained with distillation, the teacher's knowledge of some classes is incomprehensible to the student model. We observe that while KD improves the overall accuracy, it is at the cost of the model becoming inaccurate in these undistillable classes. After establishing their widespread existence in state-of-the-art distillation methods, we illustrate their correlation with the capacity gap between teacher and student models. Finally, we present a simple Teach Less Learn More (TLLM) framework to identify and discard the undistillable classes during training. We validate the effectiveness of our approach on multiple datasets with varying network architectures. In all settings, our proposed method is able to exceed the performance of competitive state-of-the-art techniques.
Accept
This paper makes an interesting observation on knowledge distillation such that excluding certain undistillable classes improves performance. This observation is quite interesting and potentially impactful for a better understanding of knowledge distillation. The authors use this observation to consistently improve the existing knowledge distillation methods in the experiments. One weakness is that the explanation of "why" it is beneficial to exclude certain classes is not very satisfactory. Nevertheless, the strength of this paper outweighs the weakness. All the reviewers are positive about this paper and I also recommend acceptance.
test
[ "iPwXuaOIv8r", "EAjZDOSQwx", "hlMnOTVqZ3i", "osaeg2lotp", "PRnwyIa_SJO", "1tSSeS72JBH", "xULoARL9u-0", "WJM7CAYsEZff", "T5LLweSo3DJ", "Q2Q2QoPLK3g", "C2acSNOB3M2", "3K_sozdSWEu", "z3Ay6p2EfjE", "WRs496SPxay", "czJzjh2S3H7", "jszy-HpoXIL", "cWh8Hv4KR4", "Xcpf0lKynj", "XvvhBJP9XJ8" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for additional results. I have increased my recommendation, although I stand by my review regarding the lack of concrete understanding of undistillable classes.", " Thanks for your reply. The additional analysis of the undistillable classes from the Supplementary is interesting and solves my concerns. I believe this finding can inspire the community to understand KD better. Thus, I keep my positive rating. ", " We are glad that our initial rebuttal addressed some of your concerns. Here, we are focusing on the parts (primarily three points) where we recon more justification is necessary to clarify the point that our approach is not an improvement of ESKD.\n&nbsp;\n1. The **motivations** behind ESKD and TLLM are fundamentally different.\n\nESKD shows that teacher with higher accuracy does not necessarily make better students. They conclude this is a **consequence of mismatched capacity**. Different from ESKD, our work attributes the “large teacher worse student” phenomenon to the **existence of undistillable classes** from a **data-driven standpoint**.\n\n&nbsp;\n2. Different motivation leads to varying methods.\n\nBecause ESKD thinks reducing the capacity gap between teacher and student is the key to resolving the “large teacher worse student” issue, they propose to leverage the early-stopping strategy to **train teacher model** and then adopt this **early-stopped teacher** to perform distillation on student model. They assume that applying regularization (early-stopping in their case) to teacher can reduce the capacity gap (highlighted in **Section 5.5 from the ESKD paper**.) Instead, our approach is to adopt class-wise early-stopping, which aims to **eliminate the adverse effect** brought by undistillable classes. \n\nFurthermore, we stress that ESKD proposed two early-stopping strategies. The first is to apply early-stopping during distillation (the methodologically similar one to TLLM), and the second is to apply early-stopping on teacher. Note that the former (the one reviewer think is methodologically similar to TLLM) **does not** change the observation that, quoted in Section 5.3 of ESKD, “larger, more accurate teachers do not result in more accurate students.” In contrast, we demonstrate that TLLM allows **larger teachers to make better students** (Section 4.2) and perform significantly better than ESKD (see R1C1 for details). It highlights the importance of our motivation and the difference in our methods, which makes, at first glance, a methodologically similar approach, much more effective than ESKD.\n&nbsp;\n\n3. Our contribution is more than TLLM\n\nWe also emphasize that our contribution is more than TLLM. As we have stated in our paper, we examine the “large teacher worse student” phenomenon from a data-centric perspective and provide some interesting observations on undistillable classes that are well supported by extensive experiments. Our proposed technique, despite being simple, is very effective in partially resolving this issue. In our revision, we included analysis to help readers to gain some understanding of the characteristics of these classes. We believe that our work offers a novel perspective on the analysis of knowledge distillation.</li>\n\n", " We are glad that our initial rebuttal clarified most of your concerns. In R4C4, the TLLM use LS. Here, we provide an ablation study which **remove LS from TLLM**. The experiments are similar to table in R4C4. 
The experiments results are the following:\n\n| Method | ResNet32x4/ResNet8x4 | ResNet50/MobileNetV2 | ResNet50/ShuffleNetV1 |\n|---------|----------------------|----------------------|-----------------------|\n| Student | 72.50%| 64.60%| 70.50%|\n| KD | 73.08%| 67.28%| 74.07%|\n| LS | 73.86%| 65.11%| 70.78%|\n| **TLLM w/o LS** | 75.11% | 69.17% | 76.13% |\n| **TLLM w/ LS** | 75.53%| 69.54%| 76.67%|\n\nThe experimental results between TLLM w/o LS\" and \"TLLM w/ LS \" demonstrate that adopting LS in TLLM only improves the overall performance **marginally**. This is because it serves only as a replacement for the regularization effect, which has been removed when undistillable classes are discarded. However, when LS is removed from TLLM (TLLM w/o LS), we can still observe a noticeable improvement in accuracy compared to KD. It again verifies that the **performance gain indeed comes from TLLM** itself.\n\nAlso, we admitted that the concrete understanding of undistillable classes remains unclear. Nevertheless, we believe our work provides an interesting and practical new perspective on understanding the “large teacher worse student” phenomenon and knowledge distillation in general.\n\n", " The author addresses some of my concerns. This paper is an improvement on ES that uses a subset of the dataset, specifically a subset of classes, to distill students. Nonetheless, there remains a lack of explanation as to why the undistillable subset appear as classes when undistillable classes lack consistency across different models and methods.", " Thank you authors for the great effort on the rebuttal. Authors have addressed most of my concerns although concrete understanding of undistillable classes still remains unclear.\n\n**Question**: For results reported in R4C4, does TLLM use LS? Do you have the performance of TLLM when no LS is applied (Only use standard cross entropy loss when learning from ground-truth labels in TLLM)?\n", " Dear reviewer WWqA :\n\nWe thank you for the precious review time and valuable comments. We have a detailed analysis of the undistillable class in our revision. We hope to discuss further with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,", " Dear reviewer jTkD:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses, including a comparison with ESKD and an analysis of undistillable classes, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,\n\n", " Dear reviewer NdAQ:\n\nWe thank you for the precious review time and valuable comments. We have provided all requested experiments and included an analysis of undistillable classes, which we believe have covered your concerns. We hope to discuss further with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,", " We would like to thank all the reviewers for their insightful comments and constructive suggestions. Below we provide our detailed response to each reviewer. 
We took into account all comments and suggestions to prepare the revised version, with major changes highlighted in blue (there are also other minor changes on wording and typos).\n\nTo summarize the most important aspects:\n\n- The new section Appendix E directly addresses the concern of R1C3 (of reviewer id: jTkD), R2C1 (of reviewer id: WWqA), R3C2 (of reviewer id: 6Hiv), and R4C1 (of reviewer id: NdAQ). We provide an analysis of the undistillable classes to understand the characteristics of these classes. Specifically, we discuss the correlation between hard classes of both teacher and student models) with undistillable classes. We investigate whether undistillable classes depend on teacher-student pairs, capacity gaps, and distillation methods. We also provide a visualization to help the reader gain a more intuitive understanding of undistillable classes.\n\n- In the new section Appendix F, we included more experiments on ImageNet (requested by R4C2, of reviewer id: NdAQ), direct comparison with label smoothing method (requested by R4C4, of reviewer id: NdAQ), experiments on CUB-200 (requested by R4C5, of reviewer id: NdAQ), and direct comparison with ESKD (requested by R1C1, of reviewer id: jTkD).\n\n- We included a more formal definition of the distillability in Section 2.1 to address the concern raised by R3C1 (of reviewer id: 6Hiv).\n\nWe thank the reviewers again for their time and valuable suggestions!", " We thank reviewer for the comments and reference reviewer with identifier WWqAas R2. Comment n of reviewr m is denoted as RmCn.\n\n\n**[R2C1]**\n> The paper lacks deep insight into these bad distillable classes. The method of this paper is similar to \"On the efficacy of knowledge distillation\". Both learn teachers at the beginning, and then gradually start to learn one-hot labels. The difference is that this paper argues that certain classes are the cause of the problem. So why do hard examples come in the form of categories, are all samples of these certain classes hard to distill?\n\nWe think the hard examples come in the form of categories because of the nature of supervised learning. The goal of supervised learning is to maximize inter-class distance. As a result, the feature representation of samples that are belonged to the same classes could be clustered together during training, and thus the hard samples form into categories. Furthermore, we confirm that not all samples are hard-to-distill.\n\nWe share the sentiment with the reviewer that there is room for improvement in terms of interpretability in the undistillable classes. Therefore, we perform an in-depth analysis to understand the undistillable classes. Specifically, we show that undistillable classes depend on teacher-student pairs and KD methods. We also verified that undistillable classes are not equivalent to hard classes in either the teacher model or the student model. Finally, we included a new section, Appendix E.\n\n\\\n**[R2C2]**\n> Would the author analyze why some undistillable classes are consistent across multiple experiments, while others are not? And in those consistent classes, is the same group of instances undistillable?\n\nWe thank the reviewer for raising these points. We realized the word \"consistent,\" as mentioned in the initial submission, is not rigorous. In our revision, we reorganized this part and added new experiments, which analyze the undistillable classes in diverse teacher-student pairs and distillation methods more thoroughly. 
Specifically, we observe undistillable classes depending on distillation methods and network. We only observe only five consistently undistillable classes under the different capacity gaps between the teacher and student model. Within these classes, we observe a tiny group of instances that are consistently undistillable. We believe these instances share no common property, and these samples are sensitive to knowledge distillation. ", " **Comment:**\n\nWe thank reviewer for the comments and reference reviewer with identifier jTkD as R3. Comment n of reviewr m is denoted as RmCn.\n\n**[R1C1]**\n> The concept and approach of the proposed method are different from ESKD[3] but somewhat similar in methodology for improving the KD performance; ESKD also halts distillation from teacher to student at the last phase. A direct comparative experiment with ESKD will enhance the completeness of this paper.\n\nWe agree with the reviewer's point that direct comparison with ESKD will enhance the completeness of this paper. We ran experiments on both CIFAR-100 and ImageNet1K to present a fair comparison with ESKD as follows:\n\n\n| Method | CIFAR100: ResNet50/MobileNetV2 | ImageNet1K: ResNet32/ResNet18 |\n|--------|--------------------------------|-------------------------------|\n| KD | 64.6% | 72.1% |\n| ESKD | 66.4% | 72.0% |\n| TLLM | **69.5%** | **72.1%** |\n\nThe results show a clear advantage of our approach. Notably, ESKD sets a hard early-stopping threshold during distillation and stops the entire distillation processing once the threshold is met. Compared to their naive strategy, which uses a hard threshold, our method tracks the per-class distillation loss and dynamically stops the distillation process in a class-wise manner, which encapsulates ESKD. The empirical study also supports our claim that early stopping on per-class distillability is critical to students' performance. We will include these results in the Appendix.\n\n\n\\\n**[R1C2]**\n> The proposed method, TLLM, is potentially impactful, but it should use significantly more computational resources and time for finding undistillable classes. Thus, this point is not useful from a practical point of view. It would be great if they could suggest an efficient way to find undistillable classes.\n\nWe appreciate the reviewer’s concern regarding the computational cost at train time and suggest that our approach may not be useful from a practical point of view. We think it is clear that knowledge distillation methods do not bring extra computational costs at test time. We want to stress that this is critical in most real-world scenarios where offline training resources are abundant, yet online computation budgets are tight.\n\nWe also agree with the reviewer’s insightful concern that overwhelming extra train-time computational cost may prevent using our method in practice. We point out that the **extra computational cost of TLLM is small**. Specifically, our method adopts the **same total training epochs** as the baseline KD methods. Our approach indeed needs to access the validation distillation loss with two extra forward-pass (**without backpropagation**) for each epoch. Generally, the most computational cost during training comes from backpropagation. Also, the **validation dataset is typically much smaller than the training dataset**. Therefore, TLLM would not introduce overwhelming computational costs compared to the baseline KD approaches. 
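For concreteness, the class-wise early stopping described above can be sketched as follows; the moving-window criterion, loss weights, and helper names are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

class ClassWiseEarlyStopper:
    """Tracks per-class validation distillation loss and flags classes whose
    loss has stopped improving over a moving window (assumed criterion)."""

    def __init__(self, num_classes, window=5):
        self.num_classes = num_classes
        self.window = window
        self.history = [[] for _ in range(num_classes)]   # per-class loss curves
        self.stopped = torch.zeros(num_classes, dtype=torch.bool)

    def update(self, per_class_val_loss):
        # per_class_val_loss: length-num_classes distillation loss, averaged on
        # the validation set with an extra forward pass (no backprop) per epoch.
        for c in range(self.num_classes):
            self.history[c].append(float(per_class_val_loss[c]))
            h = self.history[c]
            if len(h) > self.window and min(h[-self.window:]) >= min(h[:-self.window]):
                self.stopped[c] = True                     # halt distillation for class c
        return self.stopped


def student_loss(student_logits, teacher_logits, targets, stopped, T=4.0, alpha=0.1):
    """Label-smoothed CE for classes whose distillation was halted,
    CE plus KD for the remaining (still distillable) classes."""
    is_stopped = stopped.to(targets.device)[targets]              # (batch,) bool mask
    ce = F.cross_entropy(student_logits, targets, reduction="none")
    ce_ls = F.cross_entropy(student_logits, targets,
                            label_smoothing=alpha, reduction="none")
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="none").sum(dim=1) * (T * T)
    return torch.where(is_stopped, ce_ls, ce + kd).mean()        # loss weights omitted
```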
\n\n\\\n**[R1C3]**\n> They claim that unstillable classes are universes, but there is no basis for that. It is necessary to analyze the characteristics of the classes and why they interfere with distillation.\n\nTo clarify, we conclude that the undistillable classes universally exist based on our extensive experiments on multiple datasets, teacher-student pairs, and distillation methods. We agree with the reviewer that it is necessary to analyze the characteristics of the undistillable classes and understand why they interfere with distillation. Therefore, we perform an in-depth analysis to understand the undistillable classes. Specifically, we show that undistillable classes depend on teacher-student pairs and KD methods. We also verified that undistillable classes are not equivalent to hard classes in either the teacher model or the student model. We included a new section Appendix E.\n\n\n\\\n**[R1C3]**\n> Minor point 1) Line127, replace j with M \n2) Line163-5, please check the suitability of examples \n3) Line166, what is the meaning of crystal \n4) Figure 4, explain the meaning of the dashed line\n\nWe thank the reviewer for raising these points. We fixed 1) in our revision, we removed the unsuitable example as mentioned in 2), we replaced the word \"crystal\" with the word \"obvious\" (the current sentence is \"the distilled student model shows obvious class-dependent bias\") in 3), and we added an explanation of the dashed line (the average increased accuracy after distillation).\n\n", " **Comment:**\n\nWe thank reviewer for the comments and reference reviewer with identifier 6Hiv as R3. Comment n of reviewr m is denoted as RmCn.\n\n\n**[R3C1]**\n> A clearly mathematical definition of the “undistillable class” is missing. From section 2.1, we know that the “undistillable classes” are classes with bad distillability. However, a specific mathematical distillability measurement ||c is not given, which makes it difficult to understand the detailed implementation in this paper. As in Figure 5, we know that the undistillable class is a class whose accuracy after KD is lower than that before KD. However, it is unclear whether this is a necessity and sufficiency of undistillable class. It is also unclear about what specific criterion this paper uses to select samples in Figure 1. To this end, this paper should give a clear definition of it in the rebuttal letter.\n\nWe thank the reviewer for raising this point. To help understand the detailed implementation of our work, we provide a formal definition of the distillability for the measurement |c| based on the measurement of accuracy in the revision. We note that this is also the specific criterion that we use to select samples in Figure 1. Indeed, we cannot guarantee that our distillability measurement is a necessary and sufficient condition for the undistillable class. Overall, our work aims to explain the \"large teacher worse student\" phenomenon based on the empirical observation that there exist some classes in the student model which achieve inferior performance after distillation compared to their vanilla-trained counterpart. We hope our work can inspire the community to explore the necessary and sufficient conditions to define such a phenomenon in a rigorous manner.\n\n\n\n\n\\\n**[R3C2]**\n> More studies of the undistillable classes should be discussed. This paper mainly studies the proportion of undistillable classes, which is insufficient. More studies about the distribution of these classes should be explored. 
For example, whether these undistillable classes are challenging classes (i.e., classes with relatively low accuracy before KD), and whether these classes are network-independent.\n\nWe share the sentiment with the reviewer that more studies of the undistillable classes should be discussed. The reviewer also rightly asked whether these undistillable classes are challenging classes for teachers and whether these classes are network-independent. We compare the per-class accuracy of teacher model and student model with the change of accuracy of student model with and without KD. Our experiments cover three teacher-student pairs. The analysis shows that **undistillable classes are not equivalent to challenging classes**. Furthermore, we look into the undistillable class of three varying teacher-student pairs and observe that **undistillable classes are network-dependent**.\n\nTo strengthen our work, we conduct an in-depth analysis of undistillable classes. Specifically, we discussed whether different KD methods could result in different undistillable classes on the same teacher-student pair. We further show that undistillable classes are distillation-dependent. We also verify that undistillable classes are hard classes for students. In other words, we cannot determine whether a class is distillable based on either the teacher's prediction or the student's prediction. We hope our study can help readers to understand the observed phenomenon. We included these additional analyses in the new section Appendix E.", " **Comment:**\nWe thank reviewer for the comments and reference reviewer with identifier NdAQ as R4. Comment n of reviewr m is denoted as RmCn.\n\n**[R4C1]**\n> Although the work is technically sound, do you think the explanation / definition of undistillable classes is a bit overkilled? Can the following be a more simpler explanation (I will use ImageNet experiment as example): The teacher network is not perfect. That is the top1 accuracy of ResNet-50 teacher is 76.2% (Table 3). Therefore, the teacher tends to make a noticeable amount of errors. If class-wise analysis is performed on the teacher based on misclassified samples, it corresponds to a subset of classes which can be treated as undistillable classes. Therefore, the proposed TLLM avoids such situations by eliminating classes where the teacher's predictions are largely incorrect and replacing them with ground-truth labels (one-hot vectors). Some analysis I’d like to see are: 1) What is the average accuracy of ResNet-50 teacher (Table 3) for regular and undistillable classes? 2) Can you decompose the improvements obtained () in Tables 2, 3 for regular and undistillable classes?\n\nWe understand the reviewer's valid concern about the possible overkilled explanation/definition of undistillable classes. We think the reviewer suggests that the undistillable classes are misclassified samples where the teacher's predictions are largely incorrect (we refer to teacher's hard classes, where in the distillation scenario, they are classes with relatively low accuracy on the teacher model). And the main purpose of TLLM is to eliminate teachers' hard classes during distillation and replace them with ground-truth labels. \n\nWe respectfully disagree with the reviewer on this point. We ran additional experiments to test whether the teacher's hard classes are equivalent to undistillable classes. We evaluate three varying teacher-student pairs with KD on CIFAR-100. 
The per-class test accuracy of the teacher and compared with the change of test accuracy for the student with and without KD (the definition of undistillable classes) are recorded. Our results indicate no correlation between teachers' hard classes and undistillable classes. The details of these experiments are added to the new section Appendix E.2.\n\nTo directly address the reviewer's concern, we report the average accuracy of ResNet-50 teacher for regular and undistillable classes as follow: \nResNet50 regular-classes vs ResNet50 undistillable classes: 76.1% vs 76.2%\nWe note that the undistillable classes take over one-third of the overall classes. This result again verifies our previous observation that **teacher's hard classes are not undistillable classes**.\n\nAdditionally, when decomposing the improvements obtained in Table 2 (ResNet32x4/ResNet8x4 - KD) for regular and undistillable classes, the TLLM improves the undistillable classes by 1.33% and **3.74%**, respectively. It shows that our method indeed **improves the performance of undistillable classes**. \n\nOverall, we believe that these experiments verify our analysis and demonstrate that the explanation/discussion of undistillable classes cannot be simplified based on teacher's misclassified classes.\n\n**[R4C2]**\n> ImageNet results are insufficient to assess the efficacy of proposed TLLM: What is the reason for using only KR (Chen et. al) as the baseline for ImageNet experiments? Can the authors produce a table similar to Table 2 for ImageNet to show the improvements of the proposed TLLM scheme?\n\nOur previous version only reports the performance of TLLM based on KR because KR is a competitive baseline. We want to demonstrate that our approach can achieve noticeable improvement even on a state-of-the-art KD algorithm. We agree with the reviewer that similar experiments as for CIFAR-100 should be present for ImageNet1K. Thus, we ran additional experiments on ImageNet1K based on KD, AT, and SFTN, respectively.\n| Method | KD | TLLM + KD | AT | TLLM + AT | SFTN | TLLM + SFTN |\n|----------------------|-------|-----------|-------|-----------|------|-------------|\n| ResNet32/ResNet18 | 70.7% | 72.1%| 70.7% | 71.5%| 71.1%| 72.0%|\n| ResNet50/MobileNetV1 | 70.5% | 72.0%| 69.5% | 70.6%| 71.5%| 72.8%|\n\nOn both teacher-student pairs, our proposed method can improve over multiple distillation baselines.\n\n**[R4C3]**\n> The Baseline performance of MobileNetV2 in Table 3 (ImageNet) seems to be very low. top1 accuracy should be ~72.154% whereas the reported accuracy is only 68.9%. Please clarify. Link to public pytorch models with top1 accuracies: https://pytorch.org/vision/stable/models.html\n\nWe thank R4 for careful reviews and for pointing out this incorrect performance. This is a typo, and we actually conducted the experiments based on **MobileNetV1**. We evaluated MobileNetV1 since it is a conventional experiment presented in many previous works, i.e., DKD[7]. We sincerely apologize for this mistake; we fixed it in the revised version.", " R4C4: \n> If the authors use label smoothing for undistillable classes (Lines 66-68) in the student, it is not reasonable to compare with reported Baselines as label smoothing will further improve accuracy (by alleviating models’ overconfidence [1, 2, 3]). A more reasonable comparison will be to include the accuracies of students trained with label smoothing. Can you report these results for Tables 2, 3. 
I suspect that a noticeable amount of improvement can be obtained by using label smoothing, especially if undistillable classes are semantically similar classes (See [2, 3]).\n\nWe appreciate the reviewer's concern about the impact of label smoothing in our method. We share the view that label smoothing is a practical regularization technique that can alleviate the model's overconfidence. Typically, knowledge distillation [4,5,6] can also be considered as a regularization technique, which has similar functionality to label smoothing. Thus, **it is uncommon to perform knowledge distillation with label smoothing together**. In our work, we adopt label smoothing on those undistillable classes mainly because the KD is halted on these classes. The regularization effect brought by KD is removed on the undistillable classes. Thus, we leverage label smoothing to compensate for such a missing regularization effect.\n\nTo directly address the reviewer's concern, we ran experiments to compare the baseline in Table 2 with and without label smoothing, as requested. We present the results as follows:\n\n| Method | ResNet32x4/ResNet8x4 | ResNet50/MobileNetV2 | ResNet50/ShuffleNetV1 |\n|---------|----------------------|----------------------|-----------------------|\n| Student | 72.50%| 64.60%| 70.50%|\n| KD | 73.08%| 67.28%| 74.07%|\n| LS | 73.86%| 65.11%| 70.78%|\n| TLLM | 75.53%| 69.54%| 76.67%|\n\n\nThe LS denotes the label smoothing. We set the hyperparameter to 0.1 for all experiments, including TLLM. We can observe that the improvement of LS is marginal, and TLLM achieves significantly better performance compared to LS. We believe these empirical results can support that **the improvement of TLLM is not attributed to the use of label smoothing**. \n\nWe are also glad that R4 cited three excellent papers [1,2,3] that study the compatibility of **label-smoothed teacher** and knowledge distillation. We stress that these works study the phenomenon of **when the teacher model is trained with label smoothing techniques** and how to distill the student model with such a label-smoothed teacher. However, TLLM leverages label-smoothing during distilling the student network. **Thus, these works[1,2,3] are orthogonal to our work**. In addition, we have included [2] in our study (Figure 7 in the initial Appendix), which shows that label-smoothed teachers also suffer from undistillable classes. \n\nR4C5\n> CUB200 KD results are missing (Not available in Supplementary as well).\n\nWe thank the reviewer for raising this point. It appears that our description of the experiments caused some confusion. We stress that our initial version only evaluates TLLM on CIFAR-100 and ImageNet1K. As quoted in line 54, the experiments on CUB-200 are only used to verify the existence of undistillable classes. In Appendix, our implementation details on CUB-200 primarily correspond to the experiments of how we obtain the results in Figure 4 (top right) and Figure 10 in the Appendix. However, to expand upon our initial submission, we ran additional experiments on CUB-200 with two teacher-student pairs (ResNet50/ResNet18 and ResNet50/MobileNetV2). We compare with KD and present the results as the following:\n\n| Method | ResNet50/ResNet18 | ResNet50/MobileNetV2 |\n|---------|-------------------|----------------------|\n| Student | 76.89%| 79.20%|\n| KD | 79.96%| 81.18%|\n| TLLM | 81.15%| 82.26%|\n\nWe also achieved superior performance compared to the KD baseline. 
We included these experiments along with implementation details in the new section Appendix F.3. \n\nR4C6\n> What is the final optimization/loss function (preferably in equation form)?\n\nNotably, we leverage the change of per-class teacher curve (distillation loss) on the validation dataset to determine whether the distillation process of the particular classes should be terminated. **Our method did not use any extra loss function**.\n\nR4C7\n> What is the mixture parameter () used for label smoothing in undistillable classes?\n\nWe use 0.1 for all experiments.\n\n\n[4] Self-Distillation as Instance-Specific Label Smoothing, Neurips 2020\n\n[5] Revisiting Knowledge Distillation via Label Smoothing Regularization, CVPR 2020\n\n[6] Self-Distillation Amplifies Regularization in Hilbert Space, Neurips 2020\n\n[7] Decoupled Knowledge Distillation, CVPR 2022", " This paper introduced a new data-centric perspective on the phenomenon of 'larger teacher, worse student’. They search for the classes that disrupt the distillation process, they defined them as the undistillable classes, and then exclude them to improve the performance of the small student network when distillation; this framework is called 'Teach Less Learn More (TLLM)’. Specifically, to find the undistillable classes, they suggest two criteria; per class accuracy and agreement prediction agreement. Various ablation studies support their observation and experimental results on multiple datasets with varying networks validate the effectiveness of TLLM. ### Strengths\n- The observations are interesting and well supported experimentally. \n\n- The method is conceptually simple and seems to work well across the board. \n\n- Experiments are conducted with several neural network architectures and various datasets. \n\n### Weaknesses\n- The concept and approach of the proposed method are different from ESKD[3] but somewhat similar in methodology for improving the KD performance; ESKD also halts distillation from teacher to student at the last phase. A direct comparative experiment with ESKD will enhance the completeness of this paper. \n\n- The proposed method, TLLM, is potentially impactful, but it should use significantly more computational resources and time for finding undistillable classes. Thus, this point is not useful from a practical point of view. It would be great if they could suggest an efficient way to find undistillable classes. \n\n- They claim that unstillable classes are universes, but there is no basis for that. It is necessary to analyze the characteristics of the classes and why they interfere with distillation. \n\n- Minor point \n> Line127, replace j with M \n> Line163-5, please check the suitability of examples \n> Line166, what is the meaning of crystal \n> Figure 4, explain the meaning of the dashed line I was wondering if there is a practical way to apply TLLM over the new dataset. Please check the weaknesses and questions; in terms of practicality and analysis. ", " This paper propose that some classes is not distillable to students, and by removing these classes, the distillation performance would be improved. Pros:\n 1. The paper is well written and easy to follow.\n 2. The idea is simple and effective in practice. The observation that some classes are not distillable to the student is novel.\n 3. Experiment result is competitive against some mainstream KD methods.\n\nCons:\n 1. The paper lacks deep insight into these bad distillable classes. 
The method of this paper is similar to \"On the efficacy of knowledge distillation\". Both learn teachers at the beginning, and then gradually start to learn one-hot labels. The difference is that this paper argues that certain classes are the cause of the problem. So why do hard examples come in the form of categories, are all samples of these certain classes hard to distill? Would the author analyze why some undistillable classes are consistent across multiple experiments, while others are not? And in those consistent classes, is the same group of instances undistillable? Yes", " This paper aims to study the problem of “larger teacher, worse student” in knowledge distillation (KD). \nFrom a data-centric perspective, this paper claims that undistillable classes are the cause of the inefficacy of large teachers. Furthermore, this paper proposes a new KD framework called Teach Less, Learn More (TLLM) to address the issue. In detail, TLLM tries to identify the undistillable classes during training with a moving window and lets the student learn these classes directly from the ground truth labels. Strengths\n\n- The data-centric perspective is novel and interesting in the study of “larger teacher, worse student”, which understands the problem much better than the previous capacity mismatch explanation. Moreover, To the best of my knowledge, the per-class accuracy has not been explored in KD before. It deserves to be studied more in the future since the undistillable classes are universal. \n\n- Experiments are comprehensive and the results are convincing. This paper considers many teacher-student pairs and even some advanced architectures such as vision transformers (ViTs). Moreover, this paper considers both output-based KD and feature-based KD. \n\n\nWeaknesses\n\n- A clearly mathematical definition of the “undistillable class” is missing. \nFrom section 2.1, we know that the “undistillable classes” are classes with bad distillability. However, a specific mathematical distillability measurement ||c is not given, which makes it difficult to understand the detailed implementation in this paper. As in Figure 5, we know that the undistillable class is a class whose accuracy after KD is lower than that before KD. However, it is unclear whether this is a necessity and sufficiency of undistillable class. It is also unclear about what specific criterion this paper uses to select samples in Figure 1. To this end, this paper should give a clear definition of it in the rebuttal letter. \n\n\n- More studies of the undistillable classes should be discussed. \nThis paper mainly studies the proportion of undistillable classes, which is insufficient. More studies about the distribution of these classes should be explored. For example, whether these undistillable classes are challenging classes (i.e., classes with relatively low accuracy before KD), and whether these classes are network-independent. \n The explanation from per-class accuracy in general novel. Nevertheless, the above weaknesses refrain me to give a better score. I would be happy to improve my rating as long as the authors address the above weaknesses in the rebuttal. Yes, this paper clearly discusses the limitations and potential negative social impacts in the supplementary. ", " The major contributions of this paper are 2-fold:\n1) The thesis of this paper is to study the “larger teacher, worse student” phenomenon in knowledge distillation (KD) from a data-centric approach. 
The authors show that the above phenomenon happens due to the presence of undistillable classes in the teacher. This observation is shown across different KD methods, datasets and teacher-student architectures.\n\n2) The authors propose a simple, yet effective fix -- Teach Less, Learn More (TLLM) -- to identify and discard undistillable classes during student training, thereby improving the final accuracy of students.\n **Strengths:**\n1) This paper is written / presented clearly (especially Fig 5, bottom Middle). It is easy to follow.\n2) The proposed TLLM framework shows noticeable improvements on CIFAR-100 (across different KD methods, teacher-student architectures). \n\n\n**Weaknesses:**\n1) Although the work is technically sound, do you think the explanation / definition of undistillable classes is a bit overkilled? Can the following be a more simpler explanation (I will use ImageNet experiment as example): The teacher network is not perfect. That is the top1 accuracy of ResNet-50 teacher is 76.2% (Table 3). Therefore, the teacher tends to make a noticeable amount of errors. If class-wise analysis is performed on the teacher based on misclassified samples, it corresponds to a subset of classes which can be treated as undistillable classes. Therefore, the proposed TLLM avoids such situations by eliminating classes where the teacher's predictions are largely incorrect and replacing them with ground-truth labels (one-hot vectors). Some analysis I’d like to see are:\n- What is the average accuracy of ResNet-50 teacher (Table 3) for regular and undistillable classes?\n- Can you decompose the improvements obtained ($\\Delta$) in Tables 2, 3 for regular and undistillable classes?\n\n2) ImageNet results are insufficient to assess the efficacy of proposed TLLM: What is the reason for using only KR (Chen et. al) as the baseline for ImageNet experiments? Can the authors produce a table similar to Table 2 for ImageNet to show the improvements of the proposed TLLM scheme?\n\n3) The Baseline performance of MobileNetV2 in Table 3 (ImageNet) seems to be very low. top1 accuracy should be ~72.154% whereas the reported accuracy is only 68.900%. Please clarify. Link to public pytorch models with top1 accuracies: https://pytorch.org/vision/stable/models.html\n\n4) If the authors use label smoothing for undistillable classes (Lines 66-68) in the student, it is not reasonable to compare with reported Baselines as label smoothing will further improve accuracy (by alleviating models’ overconfidence [1, 2, 3]). A more reasonable comparison will be to include the accuracies of students trained with label smoothing. Can you report these results for Tables 2, 3. I suspect that a noticeable amount of improvement can be obtained by using label smoothing, especially if undistillable classes are semantically similar classes (See [2, 3]). \n\n5) CUB200 KD results are missing (Not available in Supplementary as well). \n\nThis is an interesting paper. In my assessment, I feel that the weaknesses of this paper outweigh the strengths. But I’m happy to change my opinion based on the rebuttal. \n\n\n====================\n\n**Post-Rebuttal:**\n\nThank you authors for the great effort on the rebuttal. \n\nI have increased my recommendation. Authors have addressed most of my concerns although concrete understanding of undistillable classes still remains unclear. 
\n\nAs discussed, please include details on the effect of label smoothing on the proposed TLLM framework.\n\n==================\n\n[1] Müller, Rafael, Simon Kornblith, and Geoffrey E. Hinton. \"When does label smoothing help?.\" Advances in neural information processing systems 32 (2019).\n\n[2] Shen, Z., Liu, Z., Xu, D., Chen, Z., Cheng, K. T., & Savvides, M. (2021). Is label smoothing truly incompatible with knowledge distillation: An empirical study. In ICLR\n\n[3] Chandrasegaran, K., Tran, N. T., Zhao, Y., & Cheung, N. M. (2022). Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing?. ICML\n Please see Weaknesses section above for a list of questions. Further please consider answering the following questions:\n\n1) What is the final optimization / loss function (preferably in equation form)?\n2) What is the mixture parameter ($\\alpha$) used for label smoothing in undistillable classes?\n Limitations / potential societal impacts discussed in Supplementary Section A." ]
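A minimal sketch of the accuracy-based distillability criterion referred to in the reviews above (a class is flagged as undistillable when its per-class accuracy drops after KD); the function name and plain-array inputs are assumptions for illustration.

```python
import numpy as np

def undistillable_classes(vanilla_preds, kd_preds, labels, num_classes):
    """Return classes whose per-class accuracy is lower for the distilled
    student than for its vanilla-trained counterpart."""
    vanilla_preds, kd_preds, labels = map(np.asarray, (vanilla_preds, kd_preds, labels))
    flagged = []
    for c in range(num_classes):
        mask = labels == c
        if not mask.any():
            continue
        acc_vanilla = (vanilla_preds[mask] == c).mean()
        acc_kd = (kd_preds[mask] == c).mean()
        if acc_kd < acc_vanilla:        # distillation hurt this class
            flagged.append(c)
    return flagged
```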
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "osaeg2lotp", "z3Ay6p2EfjE", "PRnwyIa_SJO", "1tSSeS72JBH", "C2acSNOB3M2", "T5LLweSo3DJ", "cWh8Hv4KR4", "jszy-HpoXIL", "XvvhBJP9XJ8", "nips_2022_q6bZruC3dWJ", "cWh8Hv4KR4", "jszy-HpoXIL", "Xcpf0lKynj", "XvvhBJP9XJ8", "XvvhBJP9XJ8", "nips_2022_q6bZruC3dWJ", "nips_2022_q6bZruC3dWJ", "nips_2022_q6bZruC3dWJ", "nips_2022_q6bZruC3dWJ" ]
nips_2022_xnuN2vGmZA0
VITA: Video Instance Segmentation via Object Token Association
We introduce a novel paradigm for offline Video Instance Segmentation (VIS), based on the hypothesis that explicit object-oriented information can be a strong clue for understanding the context of the entire sequence. To this end, we propose VITA, a simple structure built on top of an off-the-shelf Transformer-based image instance segmentation model. Specifically, we use an image object detector as a means of distilling object-specific contexts into object tokens. VITA accomplishes video-level understanding by associating frame-level object tokens without using spatio-temporal backbone features. By effectively building relationships between objects using the condensed information, VITA achieves the state-of-the-art on VIS benchmarks with a ResNet-50 backbone: 49.8 AP, 45.7 AP on YouTube-VIS 2019 & 2021, and 19.6 AP on OVIS. Moreover, thanks to its object token-based structure that is disjoint from the backbone features, VITA shows several practical advantages that previous offline VIS methods have not explored - handling long and high-resolution videos with a common GPU, and freezing a frame-level detector trained on image domain. Code is available at the link.
Accept
All four reviewers are positive about this work (with three Accept and one Weak Accept). All reviewers appreciate the clear writing, solid results, and the idea of using local attentions in transformers to associate object token extracted at each frame. During the discussion phase, the authors further clarified some of the questions and present additional results (e.g., limitations and ablation on frozen/finetuned detector). After reading the reviews and the rebuttal, the AC agrees with the reviewers that this is a solid work with strong results on video instance segmentation. The AC thus recommends to accept.
train
[ "fJIzT5yHN-y", "Ui1DeBuQy4h", "UgmsQgvmobU", "G2KPFKBJvqK", "6gJMQC2h9A2", "R3s4AIiq-zQ", "vWE9Fr-lorF", "iUeXUGnWaZ", "rBa5PpN3JyR", "W8wZLk1D312", "qgiwLeLGL4p", "LT3DgPfS7CL" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the author's response. They try to address all the concerns. I agree with the authors that performance wises it's a new benchmark across all datasets. However, I till think its two-stage processing(1st token extraction,2nd main network) is quite burdensome and stronger features from mask2former help its performance. As it is a really well-written paper with a strong performance but lacks on the earlier points, I will give a weak acceptance. ", " Thank you for the reply, it has solved my concerns, I'll keep the score, \"Accept\".", " We greatly appreciate your decision to increase the score. We will try to revise the paper to include the insightful ideas that you shared in the comments. Thank you.", " After reading the rebuttal, I decided to give \"Accept\". \n\nIn addition, please kindly cite the following paper, which is a recent VIS paper using novel spatio-temporal contrastive learning for instance association.\n\n[1] Jiang, Zhengkai, et al. \"STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation.\" arXiv preprint arXiv:2202.03747 (2022).", " We sincerely appreciate the reviewer for the thoughtful review and constructive feedback. We are pleased to see that the reviewer acknowledged our work “introduces novel and efficient ideas”, “is well written and easy to follow”, “demonstrates effectiveness on benchmarks”, and “provides a new perspective for long-range video understanding”. Our answers to the questions are as follows.\n\n---\n\n**Q1. Unfair comparison with related works, especially SeqFormer.**\n\nWe attached SeqFormer [28] on top of Mask2Former [7] detector (Mask2Former+Seqformer), and we provide a quantitative comparison between Mask2Former-VIS, Mask2Former+Seqformer, and VITA. The models are trained and evaluated on YouTube-VIS 2019 dataset without the use of pseudo videos.\n\n|Method |AP|AP$_{50}$|AP$_{75}$|AR$_{1}$|AR$_{10}$|\n|:--- |:---:|:---:|:---:|:---:|:---:|\n|Mask2Former-VIS [6] |46.4|68.0|50.0|46.4|56.4|\n|Mask2Former + SeqFormer [7, 28]|47.4|71.0|52.5|46.5|57.0|\n|VITA (Ours) |48.1|69.4|52.9|48.3|60.3|\n\n**Q2. Suggestion of removing redundant queries.**\n\nWe greatly thank the reviewer for sharing the interesting idea. The pruning strategy is as follows:\n\n1. The “background” probability ***p*** of all frame queries ***TN$_f$*** are measured.\n2. ***p*** is sorted in ascending order.\n3. For each frame, only top ***rN$_f$*** queries are sampled, where ***r*** **\\in (0, 1]** is a ratio. (***p*** **=** ***p*** **[ :** ***rN$_f$*** **]**)\n\nThe accuracy per ratio ***r*** is as shown in the following table:\n\n|***r***|AP |AP$_{50}$|AP$_{75}$|AR$_{1}$|AR$_{10}$|\n|:--- |:---: |:---: |:---: |:---: |:---:|\n|1.0 |49.8 |72.6 |54.5 |49.4 |61.0|\n|0.75 |49.7 |72.5 |54.4 |48.7 |61.0|\n|0.5 |48.9 |72.1 |52.0 |48.3 |60.9|\n|0.25 |48.1 |71.6 |51.6 |47.4 |59.8|\n\nFrom the reduction of the number of the frame queries, VITA can process much more frames as the quadratic computation in Clip Encoder can be alleviated. By setting the ratio ***r*** **= 0.75**, the maximum frame number that VITA can process increases **from 1392 (Table. 4) to 2635**. Meanwhile, the accuracy degradation is marginal (**-0.1 AP**), which signifies that VITA focuses more on the foreground context. We believe exploring sampling strategies of the queries can be an interesting future direction.\n\n**Q3. “Why not provide YouTube-VIS 2022 experimental results?”**\n\nThanks for the comment. 
We focused more on existing datasets because the release date of YouTube-VIS 2022 was very close to the NeurIPS paper submission date. Here, we are glad to share VITA’s accuracy (using a ResNet-50 backbone) on YouTube-VIS 2022 validation set, and we will add more results in revision.\n\n|AP |AP$_{50}$|AP$_{75}$|AR$_{1}$|AR$_{10}$|\n|:---: |:---: |:---: |:---: |:---:|\n|39.1 |60.6 |44.3 |35.6 |48.1|", " We thank the reviewer for recognizing our work “achieves state-of-the-art performance”, “effectively reduces running memory”, “is able to run hundreds of frames”, “has faster convergence”, and “makes an appealing choice for model development”. We appreciate your valuable comments, and we will respond as follows.\n\n---\n\n**Q1. Missing references.**\n\nThanks for your kind reminder. We shall cite the paper in revision.\n\n**Q2. Potential to online VIS.**\n\nWe appreciate your insightful comment. We believe there is plenty of room for variants of VITA for online inference, and an initial idea could be recurrently updating clip queries using RNNs.\n\n**Q3. Impact of the number of object tokens on the performance.**\n\nN$_{f}$ was set to 100 as the frame-level detector Mask2Former uses 100 queries by default. Please kindly refer to the experimental results in Reviewer-KReH’s section, which analyses the effects of pruning object tokens. Even if we reduce N$_f$ from 100 to 75, the accuracy loss is marginal (-0.1 AP) while the maximum number of frames that VITA can process greatly increases from 1392 to 2635.", " We appreciate your reviews. We are encouraged that you find our work “shows state-of-the-art performance on challenging datasets”, “is the first offline method can run an evaluation on OVIS which consists of long videos”, “demonstrates sufficient experiments and ablations”, “is well written and easy to understand”, and “presents the method with interpretable figures”. Here are our answers to the reviewer’s concerns and questions.\n\n---\n\n**Q1. Lack of novelty.**\n\nWe would like to note that our original contribution is to propose a new paradigm of complete-offline Video Instance Segmentation (i.e., *video-in and video-out*). While existing methods have achieved powerful performance on benchmarks, they lack applications to long and high-resolution videos. Some methods tackle the efficient design, but the inherent property of carrying dense spatio-temporal features leads to limited scalability. Therefore, our motivation is to provide the community with new insights that break the convention. As discussed in ablations (Table. 4 - 6), our proposal brings various choices - inferring long and high-resolution videos, faster convergence when training, and better practicality with a frozen detector.\n\n**Q2. The claim of fitting long-range video.**\n\nUnder the complete-offline manner, we first extract the frame-level object queries and mask features by visiting the frame-level image instance segmentation model (Mask2Former [7]). What differs the most from previous works is that VITA can discard heavy intermediate spatio-temporal features. This is because the input of VITA is only a set of frame queries. On the contrary, previous works have to keep dense spatio-temporal features as the features are used for decoding information into object queries. We will add more clarification on this in revision.\n\n**Q3. Qualitative analysis.**\n\nThank you for the constructive comment. Please kindly refer to Reviewer-56An’s section.\n\n**Q4. 
Object tokens during the evaluation phase.**\n\nWe use entire object tokens decoded by a frame-level object detector during the training and evaluation phases. There is no heuristic to filter it out. As other reviewers (4omf and KReH) also pointed out, it is certainly meaningful to investigate the performance when only a part of object tokens remains. Please kindly refer to the experimental results in Reviewer-KReH’s section, which analyses the effects of pruning object tokens. Even if we reduce N$_{f}$ from 100 to 75, the accuracy loss is marginal (-0.1 AP) while the maximum number of frames that VITA can process greatly increases from 1392 to 2635.\n\n**Q5. Performance improvement on OVIS.**\n\nThank you for the comment. OVIS (Occluded Video Instance Segmentation), as the name suggests, basically has high occlusion between instances. In addition, as we discussed in section 4.1 in the paper, we observed that OVIS has other three distinct challenges compared to YouTubeVIS. 1) much more instances appear in a single video. 2) the instances with the same categories in the same video have almost similar appearances. 3) much longer video length. While the standard VIS metrics comprehensively reflect these challenges, we suppose that the proposed method achieves high performance on OVIS for the following reason: explicit utilization of object-centric information. We learned a lesson from previous offline methods that building communication between frames is crucial to improving the performance of VIS. The context exchange through memory tokens (e.g., IFC [14]) leads to good performance on YouTubeVIS, which has relatively monotonous motions with few instances. However, IFC may struggle on videos with many instances such as OVIS. We believe that VITA’s object-centric information exchange throughout a video helps to better deal with the challenges in OVIS.\n\n**Q6. FPS comparison.**\n\nThank you for the question. We measured FPS using V100 GPU with a ResNet-50 backbone. Compared to Mask2Former-VIS [6], which runs at 48.2 FPS, VITA achieves 46.1 FPS. As the VITA module only uses frame queries instead of the whole spatio-temporal features, it shows only marginal speed degradation of 2.1 FPS.\n\n**Limitation.**\n\nWe appreciate the feedback, please kindly refer to Reviewer-56An’s section. We shall add the limitation section in the revision.\n", " We thank the reviewer for acknowledging our work and providing helpful comments. We appreciate the remarks that our work “is well written and easy to understand”, “has good and reasonable motivations”, and “shows good results with a simple implementation”. We will consider the reviewer’s concerns and questions below.\n\n---\n\n**Q1. Results for Mask2Former on OVIS in Table 2.**\n\nWe agree that it is important to compare our method against the state-of-the-art method - Mask2Former. However, we would like to point out that Mask2Former did not officially provide performance for OVIS on paper or in the GitHub repository. We believe this is because it is not possible for Mask2Former to infer the OVIS validation set on top of mainstream GPUs, as the dataset has much longer videos than YouTube-VIS. Therefore, we trained Mask2Former on OVIS by following their official training recipe of YouTube-VIS and we provide the results by running the model using CPU.\n\n|Method|AP|AP$_{50}$|AP$_{75}$|AR$_{1}$|AR$_{10}$|\n|:---|:---:|:---:|:---:|:---:|:---:|\n|Mask2Former-VIS (CPU) [6]|13.6|31.4|10.8|8.8|22.6|\n\n**Q2. 
Thorough analysis of whether VITA can recover from errors in the frame-level object detector.**\n\nWe appreciate the suggestion and agree that analyzing such scenarios would be essential to substantiate the proposed approach.\n* **Qualitative analysis:** Please kindly refer to **Figure. 1 - 2** in **rebuttal_qualitative_results.pdf** in the supplementary materials. We present the predictions of the frame-level object detector and VITA. Each video sample consists of two rows, the top row is the result of the frame-level object detector and the bottom row is the result of VITA. As shown in the results, while the frame-level detector fails to recognize either category or mask of instances that have been largely occluded, our method successfully recovers it by leveraging the temporal information.\n* **Quantitative analysis:** The one way to measure this is to convert the YouTubeVIS validation set into the frame-level annotations, then compare the frame-level AP of the frame-level detector and VITA using COCO API. Unfortunately, there is no publicly available ground truth for the validation set. Although it is not possible to make a concrete comparison, the effect can be demonstrated by restraining exchange of information between different frames in Clip Encoder: setting the window size to *W*=1, AP drops by 1.0 AP.\n\n**Q3. AP gap between end-to-end vs. frozen object detector.**\n\nCompared to COCO, YouTube-VIS dataset is annotated with only a few salient objects in videos as foregrounds. Having weights of the frame-level detector frozen to COCO, the detector cannot adapt to the YouTube-VIS domain and it embeds and interprets more objects in scenes as foregrounds. Therefore, VITA outputs more predictions as a foreground category even if such predictions are not labeled as ground-truths in YouTube-VIS. Please kindly refer to **Figure. 3** in **rebuttal_qualitative_results.pdf** in the supplementary materials. The phenomenon is shown in the following table: the number of outputs that VITA predicts as a foreground category per each threshold.\n\n|Threshold |0.1 |0.25 |0.5 |0.75|\n|:--- |:---: |:---: |:---: |:---:|\n|Frozen Detector|1116 |855 |674 |530|\n|End-to-End |653 |579 |542 |504|\n\nAs a result, it leads to a lower average precision (AP = ***TP*** / (***TP*** + ***FP***)) as it comes out with more false positive predictions. On the contrary, the more false positive predictions only slightly affect AR by the definition: ***TP*** / (***TP*** + ***FN***).\n\n**Q4. Table 6 Benchmark.**\n\nWe appreciate your kind reminder. The benchmark is YouTube-VIS 2019 and we shall add it to the caption in revision.\n\n**Limitation.**\n\nVITA has two major limitations for the ultimate long video understanding. First, the current architecture still has limitations in processing an infinite number of frames. In addition, since object tokens do not explicitly utilize temporal information, they may have difficulties in identifying complex behaviors that span over very long sequences. We believe that devising explicit designs to address these issues will be a promising future direction. We shall add this into the limitation section in the revision.\n", " - This paper considers the problem of video instance segmentation through associating the object-centric representation from frame-wise detector.\n\n- The proposed idea uses Transformer (local attentions) to associate the frame-wise object predictions.\n\n- The idea is validated on different VIS benchmarks, e.g. YTB-VIS2019, 2021, and OVIS, showing good results. 
- strengths \n - the paper writing is good, easy to understand.\n - the motivation of building on pre-trained frame wise object detector is good, which enables the model to handle long and high-resolution videos with customer-level GPUs. \n - very good results have been obtained.\n\n- weakness\n - in Table 2, what is the results for Mask2Former on OVIS, they have released model, so I believe it's possible to benchmark these numbers.\n - the analysis can be done more thoroughly, for example, it would be interesting to see, what if the frame-wise object detector fails on some intermediate frames, how much VITA can recover it.\n - not much to say on the weakness.\n\nI generally like this paper, for its reasonable motivation, simple implementation, and good results. - Numbers in Table2 can be filled in.\n\n- in Table 6, for frozen detector, what benchmark this is ? VIS-2019 ? please put it in caption. \n\n- if only 20 classes overlap, the recall remains quite high, especially on Swin-L, and this is only AR_10, I imagine AR_100 would be even closer, as you are using 100 queries, so in this case, why the gap on AP is quite large ? for example, 63 vs. 53.4.\n\n- how much VITA can recover the failed prediction from frame-wise predictions ? - There is broader impact, but no limitation section has been provided.", " The authors applied Mask2former to generate frame-wise object tokens as a pre-processing of a video. Afterward, the sliding attention mechanism from Swin Transformer is applied to aggregate all frame-wise understanding for Video Instance Segmentation. The model trains with the losses from DETR and MaskTrackRCNN. The ensemble achieves a state-of-the-art result on Youtube-VIS \nand OVIS. This is the first offline method applied to the long video dataset OVIS. Strengths:\n1. The method performs state-of-the-art offline video instance segmentation on two challenging datasets. \n2. This is the first time an offline method can run an evaluation on the long video dataset OVIS. \n3. The experiments and ablations are sufficient. The writing is good and easy to understand. The method is described well with interpretable figures. \n\nWeaknesses: \n1. The primary weakness of this paper is the lack of novelty. It is an ensemble of two prior works, i.e., Mask2Former, and Swin \nTransformer. Both of the prior methods have significant performance in their respective objectives. Hence the original contribution of this work is limited.\n2. The authors claim to fit long videos in a single GPU, enabling a long-range offline VIS method. But the Mask2Former features are extracted beforehand. Only the object representation from this prior work is processed in a single GPU. So the claim of fitting \nlong-range video is misleading.\n3. There is no qualitative analysis of the model's performance. It is unclear how or why the modules improve performance in some instances of the problem. 1. How were the object tokens calculated during the evaluation phase?\n2. Is one GPU enough to fit the video into Mask2Former and the proposed method?\n3. Why did the method improve performance for the challenging dataset OVIS?\n4. What is the FPS of the method compared to prior work? 1. The authors claim to have written the limitation in section 5, but it is not the case.", " This paper presents a memory efficient offline video instance segmentation method named VITA. It utilizes object tokens in each frame as condensed frame features which also embed object-level information. 
On the video level, it collects all the object tokens and use video queries to generate object sequences. The object sequences are multiplied with the frame features to generated object masks similar to MaskFormer work. Strength:\n-\tThe proposed VITA framework achieved state-of-the-art performance on YouTubeVIS and OVIS.\n-\tCompared to previous transformer-based methods, the proposed method effectively reduces running memory with the image features – object tokens – object sequences pipeline. It is able to run hundreds of frames in a single forward process while existing methods can only accommodate a few dozens.\n-\tThe proposed method has faster convergence compared to existing baseline such as Mask2Former-VIS (Figure 5). Making it an appealing choice for model development.\nWeaknesses:\n-\tSome missing references:\nCompFeat: Comprehensive Feature Aggregation for Video Instance Segmentation AAAI 2021\n Other questions:\n-\tAny potential of adapting this method to online VIS? \n-\tHow to determine the number of N_f in the model? Does it have an impact on the performance?\n Yes.", " This paper introduces a novel paradigm for offline video instance segmentation tasks through explicitly modeling object-oriented queries. Specifically, this paper uses an image detector to distill object-specific contexts into object tokens, then accomplishes video-level understanding by associating frame-level object tokens with an extra object encoder. Experiments on YouTuve-VIS-2019, YouTuve-VIS-2021, and OVIS demonstrate the effectiveness of the proposed method. Strengths:\n+ Using object tokens to directly model spatio-temporal is novel and efficient.\n+ This paper is well written and easy to follow. In addition, the authors also provide the code to reproduce.\n+ Experiments on the VIS benchmark demonstrate the effectiveness of the proposed method.\n+ Long-range video understanding is a crucial topic in video object detection, multi-object tracking, and video instance segmentation. This paper provides a new perspective for such areas.\n\nWeakness:\n+ Since this paper uses mask2former as image instance segmentation, thus it is not fair to compare with the previous SOTA method SeqFormer. I would like to see the results using image instance segmentation as the same as SeqFormer.\n+ Using all image queries may be redundant. So, I suggest abandoning background queries and only using foreground queries for such video instance segmentation tasks. The number of different frames may not be equal. Thus, an alternative way is to use part of object queries to ensure each frame has an equal number of queries.\n+ Why not provide YouTube-VIS-2022 experimental results?\n In general, I think this paper is good with clear writing, a well-explained approach, and good experimental results. The only concern is an unfair comparison with related works. There are no particular limitations in general. Please see the weakness above." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 5 ]
[ "vWE9Fr-lorF", "iUeXUGnWaZ", "G2KPFKBJvqK", "6gJMQC2h9A2", "LT3DgPfS7CL", "qgiwLeLGL4p", "W8wZLk1D312", "rBa5PpN3JyR", "nips_2022_xnuN2vGmZA0", "nips_2022_xnuN2vGmZA0", "nips_2022_xnuN2vGmZA0", "nips_2022_xnuN2vGmZA0" ]
nips_2022_kZnGYt-3f_X
Hilbert Distillation for Cross-Dimensionality Networks
3D convolutional neural networks have revealed superior performance in processing volumetric data such as video and medical imaging. However, the competitive performance by leveraging 3D networks results in huge computational costs, which are far beyond those of 2D networks. In this paper, we propose a novel Hilbert curve-based cross-dimensionality distillation approach that facilitates the knowledge of 3D networks to improve the performance of 2D networks. The proposed Hilbert Distillation (HD) method preserves the structural information via the Hilbert curve, which maps high-dimensional (>=2) representations to one-dimensional continuous space-filling curves. Since the distilled 2D networks are supervised by the curves converted from dimensionally heterogeneous 3D features, the 2D networks are given an informative view in terms of learning structural information embedded in well-trained high-dimensional representations. We further propose a Variable-length Hilbert Distillation (VHD) method to dynamically shorten the walking stride of the Hilbert curve in activation feature areas and lengthen the stride in context feature areas, forcing the 2D networks to pay more attention to learning from activation features. The proposed algorithm outperforms the current state-of-the-art distillation techniques adapted to cross-dimensionality distillation on two classification tasks. Moreover, the 2D networks distilled by the proposed method achieve competitive performance with the original 3D networks, indicating that the lightweight distilled 2D networks could potentially serve as a substitute for cumbersome 3D networks in real-world scenarios.
Accept
This submission was reviewed by four reviewers. All reviewers provided detailed and informative reviews. During the rebuttal, the authors actively submitted detailed responses, which led to improved evaluations and scores from the reviewers. Overall, this is an interesting paper and an accept is recommended.
test
[ "4xAsOzpbEA", "bTQnTg4wru", "aoVAnW4hFtB", "JJIe8Et3cOW8", "aPuWIimhu4", "V_QnDybtkxr", "B1aScn4H8bW", "MZC_HGi1SfI", "L0lXKROnXq", "xgW4gDCBPx", "mNV7mNhzAXw", "pmEiVv5ySy0", "kjkM-F58RDj", "SzLSXU_uQYJ", "9kCkd04JV9h", "NAgjGsmOj_R" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer iQbj,\n\nWe truly appreciate for recommending the interesting works[1, 2]. In the revision, we will discuss their contribution and highlight the difference with our work in the Related Works. Moreover, we will follow all your constructive comments to improve our paper. Please kindly let us know if you have more suggestions. We are very glad to take them further.\n\nBest regards,\n\nAuthors of Paper1270", " I checked the reviews and the author feedbacks, as well the updated manuscript. I feel the manuscript is improved from the firstly submitted one. With this understanding, I am increasing my score to 6. \n\nSome of the ideas here may be familiar to readers of the following papers [1, 2]. In my humble opinion, if these studies have any relevance to the topic at hand, it would be great if the authors would highlight them in the Related Work.\n\nReference:\n\n[1] Bootstrapping Semi-supervised Medical Image Segmentation with Anatomical-aware Contrastive Distillation\n\n[2] SimCVD: Simple contrastive voxel-wise representation distillation for semi-supervised medical image segmentation\n\n[3] Self-supervised Contrastive Cross-Modality Representation Learning for Spoken Question Answering\n\n[4] Momentum contrastive voxel-wise representation learning for semi-supervised volumetric medical image segmentation\n\n[5] Understanding Dimensional Collapse in Contrastive Self-supervised Learning", " Dear reviewer qHTA,\n\nMany thanks for improving your score. We will follow all reviewers' comments that provide very constructive and valuable improvements for our paper. If you have more suggestions, please kindly let us know. We are very glad to take them further.\n\nBest regards,\n\nAuthors of Paper1270", " Dear authors, \n\nPlease accept my apologies. My \"Mendeley Desktop\" app seems to remove the axis information, legend information, etc., from all the figures you provided in the paper. And surprisingly, this has happened only to your paper. This issue directly impacted my reviews as I could not understand your work (results) without this information. In any case, after reading other reviewers' comments and the response you provided to them, I am increasing my score. \n", " Many thanks for your valuable comments and constructive feedback! We will answer your concerns in the following.\n\n>Q1: Although the overall architecture is novel, its individual components are largely inspired by previous works [r1 - r4]. In my humble view, the work seems to be very similar to the previous work. The technical novelty should be emphasized.\n\nA1: Thank you for identifying the novelty of our architecture. In fact, Hilbert Space[1] and Hilbert Curve[2] is different in terms of concept and applications. A Hilbert Space is a real or complex inner product space, which is generalized by Euclidean space. The Hilbert Space belongs to the space theory and is commonly applied to mathematics and physics fields. The Hilbert Curve is a continuous fractal space-filling curve that gives a mapping between 1D and 2D (or higher) space, as a variant of the space-filling Peano curves[3]. The Hilbert Curve is essentially a dimensionality reduction method based on a space walk strategy, which is commonly applied in scheduling and signal processing. 
Therefore, Hilbert Curve and Hilbert Space differ from concept and application perspectives.\n\nThe works [r1, r2] mentioned by the reviewer focus on studying the feasibility and interpretability of knowledge distillation (KD) in the Hilbert Space, which are totally different from our method that utilizes the Hilbert Curve to cope with cross-dimensionality distillation. The works [r3, r4] study the interpretability of existing KD methods, which differ from our method that proposes a new distillation method for a specific scenario, 3D-to-2D distillation. \n\nWe argue that our individual components are novel. Compared with the previous works, the main technical contributions of this work are summarized as follows.\n \n1. We propose to adopt the construction process of the Hilbert Curve to help transfer dimensionally heterogeneous feature maps to an informative and distillable 1D representation effectively. To the best of our knowledge, this is the first work that facilitates the space-walk strategy (e.g., the Hilbert Curve) for efficiently Cross-Dimensionality distillation in the field. The reason why the method works, please refer the answer A5 of your question Q5.\n2. We further propose the Variable-length Hilbert Distillation to make the Hilbert Distillation pay more attention to distilling activation (a.k.a. important) features. The activation features are found by calculating gradients for all the objective classes. About the definition of activation features, please refer to the answer A3 of your question Q3.\n3. Although the cross-dimensionality distillation is of importance, the efficient solutions for the cross-dimensionality distillation problem have been rarely studied. The related research and discussions that can be referred to are very limited. This paper presents a superior approach for cross-dimensionality distillation by leveraging the theory of Hibert Curve. We believe our effective and reproductive approach can take the lead in coping with the significant and practical distillation problem.", " >Q2: Line # 169-170 “In reality, in the medical imaging task that 3D models are commonly applied, the spatial distribution of human organs are always fixed regardless of the data modality”. Why is the spatial distribution always fixed? Based on my understanding, the distribution of organs, especifically tumors, should share a high variance [r5 - r6, r12]. The authors should explore more medical examples instead of showing the special case. \n\nA2: Thanks for your constructive comment! We agree with you that the original description only represents partial medical examples. In this paper, the subsection 3.2.a (line #166-173) has clarified the adaptability of the proposed method. First, the method can naturally handle structurally invariable cases such as stable organ recognition in computerized tomography (CT) because the relative positions of the activation features are well aligned between the converted student and teacher 1D space. Moreover, the proposed method is able to deal with activation feature alignment problems in high variance distribution cases (video recognition) by appending an extra simple length-preserving fully connection layer. As stated in your comment that some medical examples also share the high variance distribution, we agree that more detailed discussion of this question in the subsection is necessary. Thus, we will add the discussion in the main body of the revised paper. 
Nevertheless, it is worth noting that this would not affect the adaptability and novelty of our method because our method will deal with these medical examples as the same as video recognition (please refer to our experiments for the video dataset).\n\n>Q3: Line # 156 - 158 “only partial feature maps are activated and crucial for the final task”. Could you show some visualization to demonstrate these.\n\nA3: Thanks for your advice. We have added some visualization in the appendix of the revision. Please check Fig. 6 in Appendix \"A. More Details About How Hilbert Distillation Works\" in the revised paper. The question you raised can be supported by the existing research about interpretability for convolutional neural networks [4, 5], which can find out the important (activation) features in feature maps. In addition, many previous works about interpretability and Attention mechanism demonstrate that only a fraction of a neural network plays a crucial role. The visualization in [5] can also give the intuition to understand the argument.\n\n>Q4: As shown in Tables, the magnitude of the improvement of the proposed distillation-based methods remain unclear.\n\nA4: Thanks for your advice. Following the the previous works [r7, 6], we have added the Average Relative Improvement (ARI) in the table 1 as listed in the following:\n\n| | ActivityNet | | | Large-COVID-19 | | |\n|-------------|-------------|-------------|----------------|-------------|-------------|---------|\n| | 3DResNet-50 | 3DResNet-50 | ARI (%) | 3DResNet-50 | 3DResNet-50 | ARI (%) | |\n| | ResNet-50 | VGG16 | | ResNet-50 | VGG16 | | |\n| Teacher | 71.34 | 71.34 | / | 90.15 | 90.15 | / | |\n| Student | 61.42 | 60.22 | / | 79.92 | 77.4 | / | |\n| KD | 62.30 | 61.45 | 184.59% | 82.08 | 82.37 | 110.51% | |\n| SP | 62.88 | 62.27 | 71.11% | 83.69 | 82.85 | 47.79% | |\n| PKT | 62.73 | 62.70 | 64.02% | 83.20 | 82.69 | 61.15% | |\n| RKD | 62.14 | 61.23 | 247.15% | 83.02 | 82.44 | 69.87% | |\n| CCKD | 62.78 | 61.88 | 98.65% | 83.19 | 82.70 | 61.27% | |\n| HD (ours) | 63.55 | 63.46 | 12.40% | 85.05 | 84.63 | 9.99% | |\n| VHD (ours) | 63.71 | 64.02 | / | 85.55 | 85.37 | / |\n\nThe value of ARI can be calculated as $ARI=\\frac{1}{M} \\sum_{i=1}^{M} \\frac{Acc_{VHD}^{i}-Acc_{BKD}^{i}}{Acc_{BKD}^{i}-Acc_{STU}^{i}} \\times 100 \\%$, where M is the number of different architecture combinations, and BKD and STU refer to the baseline methods and student network, respectively. In brief, ARI presents the magnitude of the improvement of the proposed VHD compared with the method located in that row. We will load this revision in the main body of the paper.", " >Q5: I am wondering whether the authors can provide a more clear picture of how the Hilbert distillation works. Although the results and the experiments look good, this is still the heart of the model, and it makes it hard for me to reason about the importance of the result and whether it is exhaustively defended? Do you have any theoretical proofs?\n\nA5: Thanks for the great question. We have provided two new figures in the appendix in the revision to depict how the method works. Please check Figs. 7 and 8 in Appendix \"A. More Details About How Hilbert Distillation Works\" in the revised paper. Fig. 7 shows the case of converting a 16 $\\times$ 16 2D space with two semantic pixel-level classes to 1D space using the Hilbert Curve. 
The pixels of class 2 indicated by the yellow area are still largely aggregated after finishing the mapping by walking along with the Hilbert Curve, which means the structural information is well preserved in the dimension reduction. Fig. 8 presents how the proposed Hilbert Distillation works from the perspective of activation features. As the illustrated 3D-to-2D distillation example on COVID classification, the Hilbert Curve can help aggregate lung features into the 1-dimensional space and preserve the feature continuity. Then the student features in the 2D lung area can directly learn from teacher features in the 3D lung area. \n\nThe core ability of Hilbert Distillation is from the trait that the Hilbert Curve can reduce the dimension as well as preserve the locality of data points (features). Benefiting from the curve's mapping rule, two data points which are close to each other in the space-filling curve are also close to each other in the original space. Some previous works (e.g., [7]) have analyzed this kind of clustering property. The Hilbert Curve is founded on artificial rules and is difficult to derive with a short piece of mathematical formulas though it has been widely applied in some specific areas in recent years. It is recommended to refer to [8] (2D dimension) and [9] (multi-dimension) if you need more information about the analysis of the Hilbert Curve. We would emphasize that we do not alter the theory of the Hilbert Curve. Our method consists of 1) skillfully adopting the Hilbert Curve to conduct distillation between cross-dimensionality networks; 2) dynamically changing the walking stride of the Hilbert Curve in activation areas to improve the distillation performance further.\n\n>Q6: Lack of comparing current state-of-the-art knowledge distillation based methods [r5 - r8].\n\nA6: Thanks for recommending the related SOTA works. If we understand right, the SOTA methods you recommended are actually [r7] - [r11]. We have added the experiments of [r8], [r10], and [r11] on ActivityNet and Large-COVID-19. Please check the table presented below. As most recent knowledge distillation based methods, the added methods only consider the 2D-to-2D distillation thereby they need the help of alignment functions as what we have analyzed in Line # 261 - 266. Our method outperforms previous approaches on 3D-to-2D distillation because we largely focus on preserving structural information of 3D feature maps for 2D student networks to learn. We truly appreciate for recommending the contrastive learning based methods [r7, r9]. The reason why they are not included in the experiment is because the student requires the same/augmented samples of the teacher to construct \"positive pair\", which is impossible in cross-dimensionality distillation as the input dimension is different. We agree that detailed discussion about the adaptability of contrastive learning based methods on 3D-to-2D distillation problems in our paper is necessary. We have added the discussion in the appendix of the revision. Please refer to Appendix \"C. 
The Adaptability of 2D-to-2D Distillation Methods on 3D-to-2D Distillation Problems\" in the revised paper.\n\n| | ActivityNet | Large-COVID-19 |\n|-----------------------|------------------|------------------|\n| HKD[r8] (with avg) | 61.85 $\\pm$ 0.26 | 82.03 $\\pm$ 0.45 |\n| HKD[r8] (with cnv) | 62.20 $\\pm$ 0.47 | 81.84 $\\pm$ 0.29 |\n| HKD[r8] (with max) | 62.13 $\\pm$ 0.19 | 82.31 $\\pm$ 0.41 |\n| RKD[r10] (with avg) | 62.37 $\\pm$ 0.33 | 82.50 $\\pm$ 0.37 |\n| RKD[r10] (with cnv) | 62.42 $\\pm$ 0.51 | 82.73 $\\pm$ 0.56 |\n| RKD[r10] (with max) | 61.97 $\\pm$ 0.48 | 82.76 $\\pm$ 0.39 |\n| ResKD[r11] (with avg) | 62.74 $\\pm$ 0.62 | 82.61 $\\pm$ 0.82 |\n| ResKD[r11] (with cnv) | 62.89 $\\pm$ 0.76 | 83.02 $\\pm$ 0.78 |\n| ResKD[r11] (with max) | 62.74 $\\pm$ 0.62 | 83.27 $\\pm$ 0.71 |\n| HD (ours) | 63.55 $\\pm$ 0.28 | 85.05 $\\pm$ 0.58 |\n| VHD (ours) | 63.71 $\\pm$ 0.63 | 85.55 $\\pm$ 0.72 |\n\nNote that this is a one-off table for this response. The results will be integrated into Fig.3 of the main body in the revised paper to keep the consistent style for comparable methods.", " >Q7: More benchmarks in computer vision and medical imaging domains should be included to demonstrate the robustness of the proposed method.\n\nA7: Thanks for your advice. We have continued some extra experiments on Kinetics-400 after the submission. Since they are not included in the original submission, we listed the table in the following:\n\n| Kinetics-400 | | |\n|----------------|------------------|------------------|\n| Teacher | 3DResNet-50 | 3DResNet-50 |\n| Student | ResNet-50 | VGG16 |\n| Teacher | 74.15 | 74.15 |\n| Student | 67.20 $\\pm$ 0.23 | 65.43 $\\pm$ 0.19 |\n| KD | 68.03 $\\pm$ 0.31 | 66.71 $\\pm$ 0.46 |\n| SP | 69.14 $\\pm$ 0.48 | 68.29 $\\pm$ 0.55 |\n| PKT | 68.37 $\\pm$ 0.29 | 67.35 $\\pm$ 0.41 |\n| AT (with avg) | 68.21 $\\pm$ 0.59 | 66.86 $\\pm$ 0.30 |\n| AT (with cnv) | 68.35 $\\pm$ 0.44 | 67.14 $\\pm$ 0.47 |\n| AT (with max) | 68.13 $\\pm$ 0.32 | 67.19 $\\pm$ 0.44 |\n| AFD (with avg) | 68.82 $\\pm$ 0.65 | 67.77 $\\pm$ 0.58 |\n| AFD (with cnv) | 69.59 $\\pm$ 1.02 | 67.90 $\\pm$ 0.81 |\n| AFD (with max) | 69.08 $\\pm$ 0.64 | 68.48 $\\pm$ 0.59 |\n| HD (ours) | 70.28 $\\pm$ 0.36 | 69.82 $\\pm$ 0.42 |\n| VHD (ours) | 70.91 $\\pm$ 0.85 | 70.40 $\\pm$ 0.73 |\n\nThe results demonstrated that our method still outperforms the best performance holder AFD and SP in the current completed experiments. We have added the results in the appendix of the revision. Please refer to Appendix \"D. More Benchmarks\" in the revised paper. 
We will also present the experiments on Kinetics-400 of the same scale as the existing ActivityNets experiment in the final version.\n\nReference\n\n[r1] Self-Distillation Amplifies Regularization in Hilbert Space\n\n[r2] Overcoming the Curse of Dimensionality in Neural Networks\n\n[r3] Does knowledge distillation really work?\n\n[r4] Comparing kullback-leibler divergence and mean squared error loss in knowledge distillation\n\n[r5] Tumor‐infiltrating lymphocytes are a marker for microsatellite instability in colorectal carcinoma\n\n[r6] Integrative genomic profiling of large-cell neuroendocrine carcinomas reveals distinct subtypes of high-grade neuroendocrine lung tumors\n\n[r7] Contrastive representation distillation\n\n[r8] Heterogeneous knowledge distillation using information flow modeling\n\n[r9] Contrastive multiview coding\n\n[r10] Residual knowledge distillation\n\n[r11] Reskd: Residual-guided knowledge distillation\n\n[r12] Novel transfer learning approach for medical imaging with limited labeled data\n\n[1] Levitan, B.M. (2001), \"Hilbert space\", Encyclopedia of Mathematics, EMS Press.\n\n[2] Über die stetige Abbildung einer Linie auf ein Flächenstück\n\n[3] Sur une courbe, qui remplit toute une aire plane\n\n[4] Grad-cam: Visual explanations from deep networks via gradient-based localization\n\n[5] Learning Deep Features for Discriminative Localization\n\n[6] Distilling Holistic Knowledge with Graph Neural Networks\n\n[7] Analysis of the Clustering Properties of the Hilbert Space-Filing Curve\n\n[8] Analysis of the Hilbert curve for representing two-dimensional space \n\n[9] On Multidimensional Curves with Hilbert Property", " >Q4: Only two -relatively small - datasets were used for the evaluation. As far as I understand the proposed method can be used for any video dataset.\n\nA4: Thanks for your advice. We have continued some extra experiments on Kinetics-400 after the submission. Since they are not included in the original submission, we listed the table in the following:\n\n| Kinetics-400 | | |\n|----------------|------------------|------------------|\n| Teacher | 3DResNet-50 | 3DResNet-50 |\n| Student | ResNet-50 | VGG16 |\n| Teacher | 74.15 | 74.15 |\n| Student | 67.20 $\\pm$ 0.23 | 65.43 $\\pm$ 0.19 |\n| KD | 68.03 $\\pm$ 0.31 | 66.71 $\\pm$ 0.46 |\n| SP | 69.14 $\\pm$ 0.48 | 68.29 $\\pm$ 0.55 |\n| PKT | 68.37 $\\pm$ 0.29 | 67.35 $\\pm$ 0.41 |\n| AT (with avg) | 68.21 $\\pm$ 0.59 | 66.86 $\\pm$ 0.30 |\n| AT (with cnv) | 68.35 $\\pm$ 0.44 | 67.14 $\\pm$ 0.47 |\n| AT (with max) | 68.13 $\\pm$ 0.32 | 67.19 $\\pm$ 0.44 |\n| AFD (with avg) | 68.82 $\\pm$ 0.65 | 67.77 $\\pm$ 0.58 |\n| AFD (with cnv) | 69.59 $\\pm$ 1.02 | 67.90 $\\pm$ 0.81 |\n| AFD (with max) | 69.08 $\\pm$ 0.64 | 68.48 $\\pm$ 0.59 |\n| HD (ours) | 70.28 $\\pm$ 0.36 | 69.82 $\\pm$ 0.42 |\n| VHD (ours) | 70.91 $\\pm$ 0.85 | 70.40 $\\pm$ 0.73 |\n\nThe results demonstrated that our method still outperforms the best performance holder AFD and SP in the current completed experiments. We have added the results in the appendix of the revision. Please refer to Appendix \"D. More Benchmarks\" in the revised paper. We will also present the experiments on Kinetics-400 of the same scale as the existing ActivityNets experiment in the final version.\n\n\nReference\n\n[1] Contrastive representation distillation\n\n[2] Contrastive multiview coding\n\n[3] Cross-Layer Distillation with Semantic Calibration", " Many thanks for your valuable comments and constructive feedback! 
We will answer your concerns in the following.\n\n>Q1: There are no comparisons with regular distillation approaches from larger 2D networks to smaller 2D networks. Only cross dimensional distillation is evaluated - if I have understood the experiments correctly.\n\nA1: Thanks for your comment. Since 3D-to-2D distillation problem has been rarely studied, the baselines in our experiments are actually 2D-to-2D distillation approaches. To the best of our knowledge, there is no efficient solution for the cross-dimensionality distillation problem so far. In our experiments, we employ the regular 2D-to-2D distillation approaches to adapt to 3D-to-2D scenarios for comparison by leveraging some extra operations such as alignment functions. When considering dealing with 3D-to-2D distillation, we categorize existing 2D-to-2D distillation methods into three classes, as follows:\n\n1. Traditional 2D-to-2D intermediate feature maps distillation methods, such as AT, FitNet, SKD, IFVD, and AFD in our experiment (Figure 3). Since they cannot to be applied directly due to the challenge of the dimensionality difference, we employ alignment functions (i.e., average pooling, max pooling, and convolution) to align the 3D feature maps into 2D representations to enable the 3D-to-2D distillation as described in Line # 261 - 266.\n2. Traditional 2D-to-2D relation knowledge distillation methods, such as PDK, PKT, CCDK, and SP in our experiment (Table 1). They can be directly applied to 3D-to-2D distillation because the calculation can be performed inside the model. We have discussed them in Line # 277- 281.\n3. Contrastive learning based distillation methods [1, 2]. The reason why they are not included in the experiment is because the student requires the same/augmented samples of the teacher to construct “positive pair”, which is impossible in cross-dimensionality distillation as the input dimension is different.\n\nWe agree that more detailed discussion about the adaptability of 2D-to-2D distillation methods on 3D-to-2D distillation problems in our paper is necessary. We have added the discussion in the appendix of the revision. Please refer to Appendix \"C. The Adaptability of 2D-to-2D Distillation Methods on 3D-to-2D Distillation Problems\" in the revised paper.\n\n>Q2: It is not clear how the layer that is used for extracting the features can affect the distillation. How should the layers be matched to each other?\n\nA2: Thanks for your question. We have provided the effectiveness of our methods with respect to feature maps extracted from different convolutional layers in the ResNet-like architecture in the initial submission. Please refer to Sec.4.4 \"Hyperparameter Tuning - Layer selection distillation\" (Line # 323 - 331), Figure 4(c), and Figure 4(d).\n\nBy leveraging the Hilbert mapping function, we can keep the information of the feature maps globally because the Hilbert curve is a surjective mapping function. Therefore, features of a student layer can receive valid information from any features in the teacher layer. However, choosing the layer pairs in the relatively same location (the former, the middle, or the latter) is recommended to reach the better distillation performance. Some recent works [3] have discussed the utilization of different blocks simultaneously. However, it is not the scope of our paper.\n\n>Q3: There is no discussion on the computational complexity of generating the Hilbert-based matching during the training.\n\nA3: Thanks for you advice. 
In fact, the computational cost from generating Hilbert Curve to finishing the Hilbert-based mapping function $\\mathcal{H}_{n,p}$ is very low. In the revision, we ran the generation processor on a single process of the Intel(R) Xeon(R) Silver 4216 CPU (2.10GHz) and calculated the computational costs on different sizes of feature maps. Results are presented as follows.\n\n| Time Consuming (ms) | | | | | | | | |\n|---------------------|-------|-------|-------|-------|-------|-------|-------|--------|\n| $S$ (side length) | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 |\n| 2D (# points = $S^2$) | 0.034 | 0.046 | 0.090 | 0.204 | 0.458 | 2.505 | 2.319 | 5.166 |\n| 3D (# points = $S^3$) | 0.048 | 0.072 | 0.159 | 0.376 | 0.874 | 3.359 | 4.550 | 10.043 |\n\nWhat can be observed is that the mapping only costs 10ms even in processing a 3D feature map with the size 256 $\\times$ 256 $\\times$ 256. In reality, the time complexity of the generation process depends on the implementation of the Hilbert Curve. We adopt the most common approach to generate Hilbert Curve in $O(\\log{n})$ time. We agree that detailed discussion about the computational complexity of the generation process is necessary. Thus, we have added the discussion in the appendix of the revision. Please refer to Appendix \"B. Computational Costs of Generating Hilbert Mapping Function\" in the revised paper.", " Thanks for your valuable time. We have read your review carefully, and, unfortunately, found that there maybe existing some factual errors in this comment. We will answer your concerns and explain the potential misunderstanding in the following.\n\n>Q1: The major weakness is the poor writing and presentation of the paper. This not only made reading paper difficult but also made it impossible to understand the results and outcomes of the proposed method. Some highlights of this include: a. Poor and unclear figures: Fig 3, Fig 4 and Fig 5 is missing legends, X- and Y-axis. It is impossible to understand what's happening. b. Grammatical issues in the writing. Overall, the submission quality is far below the quality of NeurIPS submission. I strongly encourage authors to carefully proofread their paper before submission.\n\nA1: Thanks for your comment, but we hope you can double-check this problem because we believe there maybe existing factual errors in this comment. First, although there are some grammar errors and typos, all other reviewers comment our presentation is good/excellent. One of reviewers points out that this paper is well written and easy to follow. We will definitely carefully proofread our paper in the final revision. Second, we argue that your description \"Fig 3, Fig 4 and Fig 5 is missing legends, X- and Y-axis\" is a factual error. All the three figures have the legends, X- and Y-axis in our original submission. Although Fig. 5 does not have the explicit X-axis notation, we have described in the caption that this figure shows the straightforward experiments about the influence of alpha (in other words, it is the values of alpha on the X-asis).\n\n>Q2: The experimental setup also is confusing. Q2. a. Is ActivityNet also splitted randomly? The wordings in the description of two datasets raises the question. Q2. b. Also why different ratio of train-test split is done for two dataset? Q2. c. How are hyperparameter selected? e.g., alpha, epochs number, batch-size, etc.\n\nA2. (a): Yes, The ActivityNet was also split randomly. We missed the word \"randomly\" and have added it in the revised paper.\n\nA2. 
(b): We followed the previous works (e.g., [1]) that also use the ActivityNet dataset to split training and testing sets. The adopted split of train/test sets has been widely used in ActivityNet Challenge. For the Large-COVID-19 dataset, we allocated more images to training sets to achieve sufficient training due to the relatively low scale of medical imaging datasets.\n\nA3. (c): In fact, all the hyperparameter selection and settings have been provided in “Sec. 4.2 Experiment Setting - Network Architectures”. This section has detailed described how to select alpha, epochs number, batch size, etc. Please refer to Line # 251 - 257 if you missed it, and please read this section in our paper.\n\nWe hope the above responses can address your concerns and misunderstandings (if exists). We look forward to furthering discussions with you.\n\nReference\n\n[1] CBR-Net: Cascade Boundary Refinement Network for Action Detection: Submission to ActivityNet Challenge 2020 (Task 1).", " Thanks for your appreciation and constructive feedback! The computational cost from generating Hilbert Curve to finishing the Hilbert-based mapping function $\\mathcal{H}_{n,p}$ is very low. We run the generation process on a single process on the Intel(R) Xeon(R) Silver 4216 CPU (2.10GHz) and calculate the computational costs on different sizes of feature maps. Results are presented as follows.\n\n| Time Consuming (ms) | | | | | | | | |\n|---------------------|-------|-------|-------|-------|-------|-------|-------|--------|\n| $S$ (side length) | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 |\n| 2D (# points = $S^2$) | 0.034 | 0.046 | 0.090 | 0.204 | 0.458 | 2.505 | 2.319 | 5.166 |\n| 3D (# points = $S^3$) | 0.048 | 0.072 | 0.159 | 0.376 | 0.874 | 3.359 | 4.550 | 10.043 |\n\nWhat can be observed is that the mapping only costs 10ms even in processing a 3D feature map with the size 256 $\\times$ 256 $\\times$ 256. The time complexity of the generation process depends on the implementation of the Hilbert Curve. We adopt the most applied approach to generate Hilbert Curve in $O(\\log{n})$ time. We agree that detailed discussion about the computational complexity of the generation process is necessary. We have added the discussion in the appendix of the revision. Please refer to Appendix \"B. Computational Costs of Generating Hilbert Mapping Function\" in the revised paper.", " The paper studies 3D knowledge distillation for video analysis and medical imaging via Hilbert curve. The authors propose a Hilbert Distillation (HD), to explicitly distill structural information in intermediate features of 3D models. The authors further design Variable-length Hilbert Distillation (VHD) to dynamically shorten the walking stride of the Hilbert curve. The authors demonstrate the robustness of the method compared with some state-of-the-art methods.\n ##### Strengths\n+ The paper proposes Hilbert Distillation (HD) to distill structural information from intermediate 3D feature maps.\n+ The authors introduce Variable-length Hilbert Distillation (VHD) to efficiently transfer structural knowledge via Hibert curves.\n+ The results are promising compared to some previous baselines.\n\n##### Weaknesses\n- The technical novelty of the method could be emphasized. 
\n- The usefulness and motivation of the Hilbert curve is not clear.\n- There are some claims that are not well supported with references, empirical or theoretical evidence (See “Question” - Q3)\n- Results should include state-of-the-art methods from the literature.\n- More benchmarks should be included to demonstrate the robustness of the proposed method.\n - Although the overall architecture is novel, its individual components are largely inspired by previous works [1 - 4]. In my humble view, the work seems to be very similar to the previous work. The technical novelty should be emphasized.\n- Line # 169-170 “In reality, in the medical imaging task that 3D models are commonly applied, the spatial distribution of human organs are always fixed regardless of the data modality”. Why is the spatial distribution always fixed? Based on my understanding, the distribution of organs, especifically tumors, should share a high variance [5 - 6, 12]. The authors should explore more medical examples instead of showing the special case.\n- Line # 156 - 158 “only partial feature maps are activated and crucial for the final task”. Could you show some visualization to demonstrate these.\n- As shown in Tables, the magnitude of the improvement of the proposed distillation-based methods remain unclear.\n- I am wondering whether the authors can provide a more clear picture of how the Hibert distillation works. Although the results and the experiments look good, this is still the heart of the model, and it makes it hard for me to reason about the importance of the result and whether it is exhaustively defended? Do you have any theoretical proofs?\n- Lack of comparing current state-of-the-art knowledge distillation based methods [5 - 8].\n- More benchmarks in computer vision and medical imaging domains should be included to demonstrate the robustness of the proposed method.\n\nReference\n[1] Self-Distillation Amplifies Regularization in Hilbert Space\n\n[2] Overcoming the Curse of Dimensionality in Neural Networks\n\n[3] Does knowledge distillation really work?\n\n[4] Comparing kullback-leibler divergence and mean squared error loss in knowledge distillation\n\n[5] Tumor‐infiltrating lymphocytes are a marker for microsatellite instability in colorectal carcinoma\n\n[6] Integrative genomic profiling of large-cell neuroendocrine carcinomas reveals distinct subtypes of high-grade neuroendocrine lung tumors\n\n[7] Contrastive representation distillation\n\n[8] Heterogeneous knowledge distillation using information flow modeling\n\n[9] Contrastive multiview coding\n\n[10] Residual knowledge distillation\n\n[11] Reskd: Residual-guided knowledge distillation\n\n[12] Novel transfer learning approach for medical imaging with limited labeled data\n\n See “Questions” above.\n", " This paper presents a cross-dimensionality distillation approach based on the use of Hilbert curves. This facilitated the distillation of 3D network to 2D networks, with a variable-length method to encourage the 2D network to attend to activation features. Experiments were performed on two distinct datasets, compared to a set of existing knowledge distillation methods. Clear margins of improvement were observed in both datasets. The presented adoption of Hilbert curves to enable cross-dimensionality distillation is interesting, elegant, and well rationalized. The novelty is high.\n\nThe experimentation was thorough, and the baseline models are representative of existing approaches. 
The results show significant improvements over all baseline models in both datasets. The ablation analyses were also thorough.\n\nI do not see notable weakness in the paper. One curiosity is regarding the computational cost of the construction of the Hilbert curve, and how does that affect the training of the model.\n A quick comment on the computation cost would be appreciated. There did not seem to be discussion of the limitation of the presented work. ", " The paper propose a Hebert curve-based approach to perform distillation of the large models into lightweight models. In specific, the paper proposes Variable-length Hilbert curve that if flexible for different activation maps from different inputs. This is different than vanilla Hilbert curve that offers hard mapping for a space of fixed scale. The efficacy of the methods is demonstrated in two separate benchmarking datasets where the proposed method achieve improved performance over existing methods. \n Strength:\n- The paper proposes a novel approach based on Hibert curve to perform distillation to facilitate the knowledge of 3D networks to improve the performance of 2D networks. The idea is interesting as it doesn't require computational scale as the existing methods.\n- The proposed Hilbert Distillation method is extended as Variable-length Hilbert Distillation (VHD) to provide more flexibilityin activation feature map and stride length. \n- The experimental evaluation demonstrate improved performance. \n\nWeakness:\n- The major weakness is the poor writing and presentation of the paper. This not only made reading paper difficult but also made it impossible to understand the results and outcomes of the proposed method. Some highlights of this include:\na. Poor and unclear figures: Fig 3, Fig 4 and Fig 5 is missing legends, X- and Y-axis. It is impossible to understand what's happening. \nb. Grammatical issues in the writing. \nOverall, the submission quality is far below the quality of NeurIPS submission. I strongly encourage authors to carefully proofread their paper before submission. \n\n- The experimental setup also is confusing. \na. Is ActivityNet also splitted randomly? The wordings in the description of two datasets raises the question.\nb. Also why different ratio of train-test split is done for two dataset?\nc. How are hyperparameter selected? e.g., alpha, epochs number, batch-size, etc.\n\n\n######### update ##########\nThere was technical issue on my end which impacted my initial reviews. After resolving this technical issue, I was able to understand the paper more clearly. With this understanding, I am increasing my score to 7. Check weakness section above. Check weakness section above.", " In this paper the authors present a cross-dimensional distillation approach. More specifically, they proposed to use the knowledge of 3D networks to improve the performance of 2D networks. TO this end, they propose to present the structural information that exists in the teacher model using Hilbert curves, which can map the high dimensional representations to one dimensional continuous filling curves. The student network is then supervised using this information. To further improve the performance of the proposed method they also propose a variable length variant of the proposed method which can dynamically shorten the walking stride of the Hilbert curve. The authors demonstrate that the proposed method outperforms the current distillation methods that can be used for cross-dimensionality distillation using two datasets. 
\nPositive aspects:\n\n- The idea proposed in this paper is interesting and makes sense. The paper is well written and easy to follow.\n- The experimental evaluation clearly demonstrates the improvements obtained using the proposed method. Ablation studies are also provided.\n\nNegative aspects:\n- There are no comparisons with regular distillation approaches from larger 2D networks to smaller 2D networks. Only cross dimensional distillation is evaluated - if I have understood the experiments correctly.\n- It is not clear how the layer that is used for extracting the features can affect the distillation. How should the layers be matched to each other?\n- There is no discussion on the computational complexity of generating the Hilbert-based matching during the training.\n- Only two -relatively small - datasets were used for the evaluation. As far as I understand the proposed method can be used for any video dataset.\n\n Generally I am positive regarding this paper, since it introduces a novel idea on how cross-dimensional distillation can be performed. However, there is clear room for improvements, e.g., compare with regular distillation from larger 2D networks to smaller ones, evaluate the effect of using different layers - different layer matching, discuss the computational complexity, extend the evaluation to other larger datasets. Authors discuss limitations of their work. However, there is no dedicated limitations section." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 4 ]
[ "bTQnTg4wru", "MZC_HGi1SfI", "JJIe8Et3cOW8", "mNV7mNhzAXw", "kjkM-F58RDj", "kjkM-F58RDj", "kjkM-F58RDj", "kjkM-F58RDj", "NAgjGsmOj_R", "NAgjGsmOj_R", "9kCkd04JV9h", "SzLSXU_uQYJ", "nips_2022_kZnGYt-3f_X", "nips_2022_kZnGYt-3f_X", "nips_2022_kZnGYt-3f_X", "nips_2022_kZnGYt-3f_X" ]
nips_2022_ucNDIDRNjjv
Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting
Transformers have shown great power in time series forecasting due to their global-range modeling ability. However, their performance can degenerate terribly on non-stationary real-world data in which the joint distribution changes over time. Previous studies primarily adopt stationarization to attenuate the non-stationarity of original series for better predictability. But the stationarized series deprived of inherent non-stationarity can be less instructive for real-world bursty events forecasting. This problem, termed over-stationarization in this paper, leads Transformers to generate indistinguishable temporal attentions for different series and impedes the predictive capability of deep models. To tackle the dilemma between series predictability and model capability, we propose Non-stationary Transformers as a generic framework with two interdependent modules: Series Stationarization and De-stationary Attention. Concretely, Series Stationarization unifies the statistics of each input and converts the output with restored statistics for better predictability. To address the over-stationarization problem, De-stationary Attention is devised to recover the intrinsic non-stationary information into temporal dependencies by approximating distinguishable attentions learned from raw series. Our Non-stationary Transformers framework consistently boosts mainstream Transformers by a large margin, which reduces MSE by 49.43% on Transformer, 47.34% on Informer, and 46.89% on Reformer, making them the state-of-the-art in time series forecasting. Code is available at this repository: https://github.com/thuml/Nonstationary_Transformers.
Accept
The paper introduces a transformer-based method for non-stationary time series forecasting. This research addresses a clear need, as acknowledged by the reviewers. Also, most reviewers found the method clearly described and the experiments compelling, demonstrating an improvement over the state of the art. The reviewers asked questions about the baselines, evaluation methods, and ablation studies. They also made requests related to clarifying the wording and some of the theory. The authors put significant effort into addressing the comments, offering detailed responses to every reviewer. Only one of the reviewers responded during the discussion period, and that response came very late. However, I read the authors' responses and concluded that they adequately addressed most issues raised by the reviewers. As the model is in the Transformer space, and transformers have previously been shown to be state of the art on a number of tasks, I do not find it necessary to compare against other 'families' of methods. So I will consider that issue addressed as well.
train
[ "CaGagqU0g8", "ZGnFhvSMhFi", "jdNFG3_4HE-", "dUx7DRHqdLE", "94IuyuI18zb", "PTGARHYIonP", "-LEJWdNfdfwp", "XqzOc4ax-o4", "NQ5CEYNr_Zwt", "KgJDHX3UdH4", "eK7GcrQpL0", "dZWw2cXfdNA", "Z0ZJu4xocbH", "_dsLH_Pv5E3", "J694mijEMXu", "brW6RJvSmQ", "evEKk77-rUm", "iuRCL_LBw0tm", "khBkzzwZWh", "M1uwH5kVmmD", "0mKB71pMTsN", "cRsEi9uUaEf" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Q6: \"why do not consider other statistic features?\"**\n\n**(1) Our proposed Series Stationarization is powerful enough in enhancing the time series stationarity.** The comparison of ADF test statistic is shown as follows. Note that a smaller value of ADF Test Statistic means more likely to be stationarity. \n\n| ADF test statistic | Exchange | ILI | ETTm2 | Electricity | Traffic |\n| ----------------------------- | ---------- | ----------- | ----------- | ----------- | ----------- |\n| Raw data | -1.889 | -5.406 | -6.225 | -8.483 | -15.046 |\n| After Series Stationarization | **-9.937** | **-10.313** | **-33.485** | **-20.888** | **-18.946** |\n\n(2) The simple formulation in Series Stationarization can also benefit the derivation of De-stationary attention.\n\n(3) We would like to leave the more sophisticated normalization methods for Series Stationarization in future work, which is included in the conclusion of the revised paper.", " Thanks for your response and questions.\n\n**Q1: “It does not mean this setting is correct.”**\n\n(1) All the works that you mentioned follow the same setting as our paper.\n\nThe works [1,2,3,4,5] that the reviewer mentioned all follow the same setting as Informer, Autoformer, or our proposed Non-stationary Transformer. Thus, we think our setting is widely-used and a convention of this community.\n\n(2) Method comparison.\n\nIt is also notable that, all the works that the reviewer mentioned are concurrent to our paper, such as DLinear [1] (26 May arXiv 2022), DeepTIMe [2] (13 Jul arXiv 2022), FreDo [3] (24 May 2022 arXiv 2022). And none of these methods are officially published. \n\nBesides, we have provided the N-BEATS (linear model), and ARIMA (classic method) as our baselines. And following the review's original review, we have compared the latest model FEDformer and ETSformer.\n\n(3) **Our model is state-of-the-art in Transformer-based models.**\n\nI think Transformer is a more powerful model than the pure linear model, which has been verified in extensive areas. And introducing powerful models into time series can benefit the future work of this community, such as designing big models or combining them with other modalities. And our model is state-of-the-art in this method paradigm.\n\n(4) We have provided a completely new benchmark by comparing methods in the original space.\n\nIn the previous rebuttal, as the reviewer requested, we have provided the results in the original space. And in the original space, our Non-stationary Transformer can also outperform other baselines and bring further promotion.\n\n**Q2: \"The motivation is unclear.\" \"Why the proposed method cannot be applied to other architecture.\"**\n\nIn this paper, we delve for the first time into **the concrete effect of \"non-stationary time series\" in Transformers**. The concrete effect is that the learned temporal dependencies (attention in Transformers) will be less distinguishable, which is defined as \"over-stationarization\". This finding motivates us to focus on the attention design and propose De-stationary Attention.\n\nFor other architectures, such as MLP, RNN. \n\n- It is hard to visualize learned temporal dependencies, where we cannot find an explicit element to represent the temporal dependencies in these models. \n- The negative effect of \"non-stationarity\" in these models is under-explored. \n- All the theoretical analyses in our paper are based on self-attention. 
It is nontrivial to generalize these to MLP or RNNs.\n\nThus, we will leave the exploration of the non-stationarity for other architectures in future work.\n\n**Q3: I don't find the response to \"Why not remove the normalization before embedding layer and use the raw data for computation of attention score?\"**\n\nSorry for the unclearness. We have provided this in the previous rebuttal. Please see **Q1-(2)**:\n\n- $\\underline{\\text{only De-stationary Attention}}$ in the table means that \"remove the normalization and use the raw data for computation of De-stationation attention\".\n- $\\underline{\\text{vinalla Transformer}}$ in the table means that \"remove the normalization and use the raw data for computation of vanilla self-attention\".\n\n**Q4: \"Why the architecture need to digest these features in this way?\"**\n\nIt is notable that by specifying the over-stationarization problem as the less distinguishable attention problem, we have narrowed down our design space into the attention calculation mechanism. Some other methods for the attention calculation of our model are included in the $\\underline{\\text{Table 9 of supplementary materials}}$, such as only $\\tau$ and only $\\mathbf{\\Delta}$.\n\nTo further address the review's concern, we also conduct an experiment by reincorporating $\\mu$ and $\\sigma$ into the feed-forward layer. But since the effect of stationarization on the feed-forward layer is under-explored, this \"feed-forward\" reincorporation design may not be as well-motivated as our De-stationary Attention. In most cases of the new experimental results (see table below), our proposed De-stationary Attention achieves better performance. Hence it is a more optimal design with theoretical support.\n\n| ETTm2 (MSE\\|MAE) | Predict 96 | Predict 192 | Predict 336 | Predict 720 |\n| --------------------------------- | ---------------------- | ---------------------- | ---------------------- | ---------------------- |\n| Series Stationarization | 0.253 \\| 0.311 | 0.453 \\| 0.404 | 0.546 \\| 0.461 | 0.593 \\| 0.489 |\n| + Reincorporation on Feed Forward | 0.275 \\| 0.329 | 0.406 \\| 0.403 | 0.502 \\| 0.465 | 0.694 \\| 0.575 |\n| + De-stationary Attention (Ours) | **0.192** \\| **0.274** | **0.280** \\| **0.339** | **0.334** \\| **0.361** | **0.417** \\| **0.413** |\n\nThe results on the full six benchmarks can be found in the response to Q3 of Reviewer PhZA.\n\n\n", " Thanks authors for providing more experiments to show the effectiveness of the new normalization layer. \n\nHowever, I am not convinced for the statement \"significant improvement\" and \"fair comparison\" for this long sequence forecasting setting. Most of recent transformer based forecasting model claim significant improvement, with 50% on MSE. At the same time, you can also find some recent works [1,2,3] use very simple architecture (even linear model) and time series decomposition strategy to achieve better performance than the transformer based models. I understand that the author directly report the results from early work, but I think existing works lack of fair comparison to shallow model, statistical model, or even non-learnable baselines. As shown in [4], a simple replication of historical data can achieve comparable performance to Informer. Then, the question is do we really need the complicated transformer based model for long sequence forecasting? Is it a waste of effort for the community? 
Lastly, I understand that the authors follow the same experimental setting and report the normalized results, but it does not mean this setting is correct. From your results, we can also find the results would be quite different when using different evaluation metric.\n\nAside from these questions and focus on the context of transformer-based model and non-stationary problem, I think the major concern of this work is the motivation is unclear. First, I think the authors should clarify the definition of \"non-stationary\" which indicates the distribution shift in both training and test data, not the traditional definition for stochastic process. This problem is interesting and important, which should be agnostic to the model architecture. However, I am unclear why the proposed method cannot be applied for other architecture, like MLP, RNN? Why it is transformer specific solution? Additionally, I am not clear of the formal definition of non-stationary, and how the proposed normalization is motivated from it? Is there any generalization guarantee for the proposed normalization in this setting with data-dependent distribution? What would happen if only the test data contain non-stationary change? The proposed analysis in Sec 3.2 seem to approximate the original self-attention with normalized data. Then the question is how to guarantee the original self-attention can tackle the non-stationary problem well? \n\nFrom the technique perspective, the proposed normalization layer seems to be heuristic. I am not clear about the justification of this design. I do not find the response to my question \"why not remove the normalization before embedding layer and use the raw data for computation of attention score?\". It seems to be time series feature engineering and incorporate statistic features, like the mean, variance of the sequence. Why the architecture need to digest these features in this way? Besides, a straightforward extension is to use more statistic features, like the top solution in M4 competition [5]. Then, why do not consider other statistic features? Lastly, if the authors use the normalized data as the input, how to differentiate the effect of the data normalization and the layer normalization?\n\n\n[1] Are Transformers Effective for Time Series Forecasting?\n[2] DeepTIMe: Deep Time-Index Meta-Learning for Non-Stationary Time-Series Forecasting\n[3] FreDo: Frequency Domain-based Long-Term Time Series Forecasting\n[4] Historical Inertia: A Neglected but Powerful Baseline for Long Sequence Time-series Forecasting\n[5] FFORMA: Feature-based Forecast Model Averaging", " Dear Reviewer,\n\nWe kindly remind you that it only left a few days of the one-week Reviewer-author discussion period. So please kindly let us know if our response has addressed your concerns. We will be happy to deal with any further issues/questions.\n\nWe made every effort to address the concerns as you suggested:\n\n- We **verified the effectiveness of the normalization layer** from both literature and experimental aspects.\n- We reclarified the experiment setting, where **all the results are fairly compared in our original paper**. 
\n- We have **re-evaluated all main results with new metrics sMAPE and MASE**.\n- We added more advanced Transformers (ETSformer, FEDformer) as our baselines and further validated that **our design can further boost these advanced models**.\n- We have **included the hyper-parameter selection strategy** in the revised paper.\n\nThanks again for your dedication to reviewing our paper.", " Dear Reviewer,\n\nWe kindly remind you that it has been 4 days since the one-week Reviewer-author discussion began. So please kindly let us know if our response has addressed your concerns. We will be happy to deal with any additional issues/questions.\n\nFollowing your suggestion, we revised the paper in the following aspects:\n\n- We reclarified our motivation and **conducted as many experiments as we can to verify the design of De-stationary Attention**.\n- We add **two new baselines (LSSL and GRU)** and evaluate them on all benchmarks to validate the performance of our model. \n- We rephrased the paper to solve every detailed confusing issue and **simplified all the overcomplicated descriptions**.\n\nThanks again for your valuable review. Looking forward to your reply.", " We sincerely thank all the reviewers for their insightful reviews and valuable comments, which are instructive for us to improve our paper further.\n\nIn this paper, we explore time series forecasting from the perspective of stationarity. Unlike previous works that solely focus on the stationarization method itself, we are the first to notice the negative effect of direct stationarization and specify the \"over-stationarization\" problem. Based on the plain model analysis, we propose the Non-stationary Transformers as a general framework, which can improve the data predictability and avoid the over-stationarization problem simultaneously. Experimentally, our method consistently boosts Transformer and its variants remarkably (over 40% MSE reduction).\n\nThe reviewers generally held positive opinions of our paper, in that **our motivation “is clear”**, our writing **“is easy to follow”**and **“well organized”**, our experimental results show **“impressive” and “significantly improvement”**, our model **“is nice and elegant”** and our paper **“really brings great things to the table”**.\n\nThe reviewers also raised insightful and constructive concerns. We made every effort to address all the concerns by providing sufficient evidence and requested results. See the $\\underline{\\text{revised paper}}$ and $\\underline{\\text{supplementary materials}}$ for details. All updates are highlighted in blue and here is the summary of the revisions:\n\n* **Motivations (Reviewers PhZA):** We highlight the contribution of noticing the negative effect of stationarization and specifying it as the concrete over-stationarization problem. And by emphasizing the instruction of theoretical analysis and evaluating every possible design of De-stationary Attention, we illustrate our motivation in both theoretical and experimental aspects.\n* **Baselines and new metrics (Reviewers PhZA, q3Tk):** We enlarge our comparison from 9 to 13 baselines on all six benchmarks, covering the advanced recurrent models and concurrent works. Besides, we add the sMAPE and MASE as new metrics and re-evaluate all the main results. 
By doing great efforts to complete these comparisons, we verify that our method still achieves the best performance and good generality on new baselines and new metrics.\n* **Ablation study of model design (Reviewers PhZA, q3Tk, 8mJi):** We added a comprehensive ablation, including the effect and design of Series stationarization (replenish the Table 5 of main text), design space of De-stationary Attention (replenish the Table 9 of supplement). We believe that the revised paper covers every detailed ablation of the motivation, designs, performance, and insight analysis.\n* **Theoretical part statement (Reviewer ZUaG):** We reformulate some equations for clearness, remove unnecessary Gaussian distribution assumptions in the plain model analysis, and correct the false statement in distribution matching.\n* **Concept definition (Reviewer ZUaG):** For scientific rigor, we have updated all the usages of stationarity concepts in the revised paper. We rephrase the “stationarity” to “the degree of stationarity”, which is a measurement of how stable the data distribution is. And we further define “Stationarization/De-stationarization” as a method to change the degree of stationarity.\n\n\nThe valuable suggestions from reviewers are very helpful for us to revise the paper to a better shape. We'd be very happy to answer any further questions.\n", " **Q3:** The \"sliding window\" operation and calculation in Series Stationarization.\n\nThe \"sliding window\" design is a convention of the time series forecasting applications. For example, in deployment, the model input is $S$ time points. As time goes by, we will keep sliding the input segment to obtain the latest length-$S$ past series for forecasting. We use this description to illustrate that our normalization is conducted on an input segment, not the whole time series.\n\nThe detailed calculation of Series Stationarization is in the $\\underline{\\text{Algorithm 1 and Algorithm 2 of supplementary materials}}$. For the length-$S$ time series segment $\\mathbf{x}\\in\\mathbb{R}^{S\\times C}$ with $C$ dimension, the normalization is conducted to each dimension. Thus, both $\\mu_\\mathbf{x}$ and $\\sigma_\\mathbf{x}$ are vectors.\n\nSince both $\\mu_\\mathbf{x}$ and $\\sigma_\\mathbf{x}$ are vectors, we assume that each variable of time series $\\mathbf{x}$ shares the same variance for the convenience of derivation. This assumption is reasonable because the z-score normalization is widely used as the preprocessing of time series forecasting ($\\underline{\\text{lines 160-162 of main text}}$).\n\n**Q4:** Suggestions on storytelling.\n\nWe very much appreciate your great suggestions about the storytelling, which can make the technique much clear and easier to understand. While we do improve the other parts, we would like to maintain the motivation about the \"stationarization\" because of the following questions.\n\n- Our method is indeed motivated by the insights and analyses of the \"stationarity\" of time series. The over-stationarization problem is specified and supported by the ADF test statistic in $\\underline{\\text{Figure 4 of main text}}$. The ADF test statistic reflects that the previous model predictions do have larger degrees of stationarity than the ground truth. 
All these analyses guide us to think about the \"stationarity\" of time series.\n- To resolve the reviewer's concern about the misleading usage of the \"stationarity\", we have rephrased all the corresponding usages based on the \"degree of stationarity\", which is clearly defined in $\\underline{\\text{Q1}}$. The $\\underline{\\text{revised paper}}$ does not have misleading usages about the \"stationarity\" and \"non-stationarity\".\n- As you stated, we attempt to \"bring great things on the table\". And stationarity is an important element in time series analysis. We hope that our writing about \"stationarity\" can provide some guidance for future research on time series forecasting. Please let us know if the above improvement is still questionable in your view.\n\n\n\n**Q5:** Pretentious statement in title and conclusion.\n\nWe have updated the title as \"Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting\" and removed the \"impressive generality and performance of the proposed framework\" in the conclusion in the $\\underline{\\text{revised paper}}$.\n\n\n\n**Q6:** Writing issues.\n\nAll the following writing issues have been rephrased in the $\\underline{\\text{revised paper}}$.\n\n- We have polished the literature review about the time series stationarity in the Introduction.\n- The grammatical mistakes and semantic repetitions are resolved.\n- We have rephrased the less rigorous and wrong expressions. For example, normalization does not transform the distribution into Gaussian distribution; normalization cannot eliminate the statistics difference; matching the first two moments cannot make the time series follow the same distribution.\n- Since the Series Stationarization is the set of normalization and de-normalization, we use the upper case for Series Stationarization to represent the combination of these two modules.\n\n\n\n**Q7:** The descriptions about Figure 1.\n\nAs stated by the reviewer, the input data to the Transformer is the I, II, III chunks respectively. The second column is just zoom-ins for the data. For clearness, we have colored the background and added the text descriptions to the figure in the $\\underline{\\text{revised paper}}$.\n\nAs for the attention matrices, it is the visualization of the calculated attention maps. For a length-$S$ time series, the matrices are in the shape of $S\\times S$. Thus, the value in the $i$-th row and $j$-th column represents the normalized attention weight of $i$-th time point w.r.t. the $j$-th time point. \n\nWe would like to thank the reviewer's meticulous suggestions again. All the suggestions are included in our $\\underline{\\text{revised paper}}$.\n", " We would like to sincerely thank Reviewer ZUaG for providing a detailed review and insightful suggestions.\n\n**Q1:** Rephrase and specific the concepts of \"stationary\", \"stationarity\", \"relative stationarity\" and \"de-stationary\".\n\nThanks a lot for your suggestion with scientific rigor. We have updated all the usages of the above concepts in the $\\underline{\\text{revised paper}}$. In detail, we reclarify and define the following concepts.\n\n- The degree of stationarity: a value to measure the degree of distribution change in time series. Especially, in this paper, we adopt the ADF test statistic as the metric. 
A smaller ADF test statistic indicates a higher degree of stationarity, which means the distribution is more stable.\n- Stationarization: A method to increase the degree of stationarity.\n- De-stationarization/De-stationary: A method to decrease the degree of stationarity back for stationarized time series.\n- Relative stationarity in Figure 4: The ratio of the ADF test statistic between the ground truth time series and the model predictions.\n\nEspecially, for the stationarization, we do not attempt to make the raw time series completely stationary. What we try is to increase the degree of stationarity, that is making the time series distribution more stable.\n\nConcretely, for the normalization module, we agree that the normalization based on $\\mu$ and $\\sigma$ cannot make the raw data a stationary time series. But with our normalization module, the ADF test statistic of the time series is getting smaller, which means the time series distribution is more stable and the time series \"tends more to be stationary\". This verifies that our proposed normalization module is an effective design to increase the degree of stationarity.\n\n|ADF Test Statistic|Exchange|ILI|ETTm2|Electricity|Traffic|Weather|\n|-|-|-|-|-|-|-|\n|Raw data|-1.889|-5.406|-6.225|-8.483|-15.046|-26.661|\n|After our Normalization|-9.937|-10.313|-33.485|-20.888|-18.946|-35.010|\n\n**Q2:** Rephrase the theoretical part.\n\nFollowing the reviewer's suggestion, we have completely polished the theoretical part in the $\\underline{\\text{revised paper}}$. \n\n(1) Revised parts.\n\n- Formulations and descriptions for equations, especially the Hadamard product and transpose.\n- Reconsider and deliberate all the assumptions in our plain model analysis.\n\n(2) Unchanged parts, especially the linear property of $f$.\n\nAs mentioned by the reviewer, the linear property of $f$ will derive the form of De-stationary Attention in $\\underline{\\text{Equation 5 of main text}}$. We would like to emphasize that this formalization is highly instructive for our final design. \n\n- Note that there are many ways to reinject the normalization parameters back into attention, such as only $\\tau$, only $\\mathbf{\\Delta}$ or other combination ways. If we didn't have the formalization in $\\underline{\\text{Equation 5 of main text}}$, we would have to try plenty of designs. Thus, the linear property of $f$ motivates us with tractable design space. \n\n- Besides, we also provide an ablation study in $\\underline{\\text{Table 9 of supplementary materials}}$, which compares the only $\\tau$ and only $\\mathbf{\\Delta}$ designs. The results confirm that the formalization in $\\underline{\\text{Equation 5 of main text}}$ achieves the best performance, demonstrating that the derived formalization surpasses other designs without such a nice theoretical support.\n\nThus, in view of the instruction of linear property for our final design, we still hold the linear property assumption, which can also make the reader easier to understand our design in De-stationary Attention.\n\nBesides, we would like to keep the original descriptions in \"Approximate $\\sigma_{\\mathbf{x}}^2$ and $\\mathbf{K}\\mu_\\mathbf{Q}$\". Note that even in the linear model analysis, both $\\mathbf{K}$ and $\\mu_\\mathbf{Q}$ cannot be obtained directly without the parameter weights in $f$. 
Thus, we adopt the MLPs to approximate them and obtain $\\tau$ and $\\mathbf{\\Delta}$.", " **Q2:** The ablation study of utilizing pre-computed statistics.\n\nBased on the derivation in $\\underline{\\text{Equation 5 of main text}}$, the optimal values of $\\tau$ and $\\mathbf{\\Delta}$ are $\\sigma_{\\mathbf{x}}^2$ and $\\mathbf{K}\\mu_{\\mathbf{Q}}$ respectively. These parameters are **data-dependent and rely on the deep features.** Thus, we cannot pre-compute the ground truth statistics of $\\tau$ and $\\mathbf{\\Delta}$, that is why we adopt one MLP with current inputs to approximate them.\n\nTo further address the reviewer's concern, we create a new well-designed baseline as follows: \n\nWe still use the Non-stationary Transformer but add a well-trained parallel Transformer, where the former is with Series Stationarization and the latter inputs the non-stationarized raw data. For this new baseline, we calculate the \"optimal values\" of $\\tau$ and $\\mathbf{\\Delta}$ from the well-trained parallel Transformer and use the calculated $\\tau$ and $\\mathbf{\\Delta}$ to refine the attention. From the below results, we find that our original design surpasses the new baseline with a parallel Transformer, even though the latter is twice larger in parameter and computation cost.\n\n|Exchange (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.567 \\| 0.591 | 1.150 \\| 0.825 |1.792 \\| 1.084 | 2.191 \\| 1.159 |\n|+ Parallel Transformer|0.162 \\| 0.280| 0.228 \\| 0.350|**0.373** \\| **0.449**| 1.579 \\| 0.898|\n|+ Ours|**0.111** \\| **0.237**|**0.219** \\| **0.335**|0.421 \\| 0.476|**1.092** \\| **0.769**|\n\n|ILI (MSE\\|MAE)|Predict 24|Predict 36|Predict 48|Predict 60|\n|-|-|-|-|-|\n|Vanilla Transformer|4.748 \\| 1.430|4.671 \\| 1.430|4.994 \\| 1.482|5.041 \\| 1.499|\n|+ Parallel Transformer|3.426 \\| 1.193|3.826 \\| 1.247| 3.886 \\| 1.281| 3.324 \\| 1.195|\n|+ Ours|**2.294** \\| **0.945**|**1.825** \\| **0.848**|**2.010** \\| **0.900**| **2.178** \\| **0.963**|\n\n|ETTm2 (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.572 \\| 0.552| 1.161 \\| 0.793|1.209 \\| 0.842| 3.061 \\| 1.289|\n|+ Parallel Transformer|0.230 \\| 0.302| 0.336 \\| 0.357| 0.459 \\| 0.424| 0.547 \\| 0.475|\n|+ Ours|**0.192** \\| **0.274**| **0.280** \\| **0.339**|**0.334** \\| **0.361**| **0.417** \\| **0.413**|\n\n|Electricity (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.260 \\| 0.358| 0.266 \\| 0.367| 0.280 \\| 0.375|0.302 \\| 0.386|\n|+ Parallel Transformer|0.170 \\| 0.275 | 0.196 \\| 0.299| 0.226 \\| 0.320| 0.227 \\| 0.322|\n|+ Ours|**0.169** \\| **0.273**| **0.182** \\| **0.286**| **0.200** \\| **0.304**| **0.222** \\| **0.321**|\n\n|Traffic (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.647 \\| 0.357|0.649 \\| 0.356|0.667 \\| 0.364| 0.697 \\| 0.376|\n|+ Parallel Transformer|0.613 \\| 0.334| 0.627 \\| 0.343| 0.623 \\| 0.337| 0.663 \\| 0.357|\n|+ Ours|**0.612** \\| **0.338** | **0.613** \\| **0.340**| **0.618** \\| **0.328** | **0.653** \\| **0.355**|\n\n|Weather (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.395 \\| 0.427 | 0.619 \\| 0.560 | 0.689 \\| 0.594 | 0.926 \\| 0.710 |\n|+ Parallel Transformer|0.213 \\| 0.261|0.265 \\| 0.307| 0.332 \\| 0.351|0.478 \\| 0.436|\n|+ Ours| **0.173** \\| **0.223**|**0.245** \\| **0.285**|**0.321**\\| **0.338**|**0.414** \\| 
**0.410**|\n\n**Q3:** The standard deviation of the results.\n\nWe repeat each experiment three times with different random seeds. The standard deviations for the main results are provided in the $\\underline{\\text{Table 4 of supplementary materials}}$.\n\n\n**Q4:** Description of Figure 4.\n\nThanks for this valuable suggestion. We have added the information about the ADF test statistic in the $\\underline{\\text{revised paper}}$.", " Many thanks to Reviewer 8mJi for providing a detailed review and insightful questions. \n\n**Q1:** The difference between Series Stationarization technique and the Scale handling in DeepAR.\n\nFor a segment of time series $\\mathbf{x}\\in\\mathbb{R}^{T\\times C}$ with mean $\\mu_{\\mathbf{x}}\\in\\mathbb{R}^{1\\times C}$ and variance $\\sigma_{\\mathbf{x}}\\in\\mathbb{R}^{1\\times C}$, the normalization in Series Stationarization is $\\frac{\\mathbf{x}-\\mu_{\\mathbf{x}}}{\\sigma_{\\mathbf{x}}}$ and the normalization in Scale handling is $\\frac{\\mathbf{x}}{\\mu_{\\mathbf{x}}+1}$.\n\nWe demonstrate that the differences lie in the following two folds:\n\n(1) The differences in motivation and effectiveness. \n\nOur proposed Series Stationarization focuses on the time series stationarity, while the Scale handling attempts to balance the value ranges of multiple time series. The detailed comparison of the module effectiveness is shown as follows:\n\n- **Both designs can balance the value ranges of multiple time series.** Especially, Series Stationarization and Scale handling transform the mean of multiple time series into zero and almost one respectively. \n\n- **Our proposed Series Stationarization is more powerful in enhancing the time series stationarity,** thereby matching the problem that our paper addresses. The comparison of ADF test statistic is shown as follows. Note that a smaller value of ADF Test Statistic means more likely to be stationarity.\n\n|ADF test statistic|Exchange|ILI|ETTm2|Electricity|Traffic|Weather|\n|-|-|-|-|-|-|-|\n|Raw data|-1.889|-5.406|-6.225|-8.483|-15.046|-26.661|\n|After Scale handling|-1.915|-5.527|-6.435|-8.560|-15.025|-27.058|\n|After Series Stationarization|**-9.937**|**-10.313**|**-33.485**|**-20.888**|**-18.946**|**-35.010**|\n\nIn addition, as stated in the $\\underline{\\text{lines 135-137 of main text}}$, Series Stationarization can make the model equivariant to translational and scaling perturbance of time series, thereby benefiting non-stationary series forecasting. In contrast, the Scale handling does not maintain this equivariance.\n\n(2) Experimental comparison.\n\nTo further compare the effect of these two modules in predictive capability, we replace the Series Stationarization in Non-stationary Transformer as the Scale handling. 
Benefiting from the stronger capability in enhancing the time series stationarity, Series Stationarization generally outperforms the Scale handling.\n\n|Exchange (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.567 \\| 0.591 | 1.150 \\| 0.825 |1.792 \\| 1.084 | 2.191 \\| 1.159 |\n|+ Scale handling|0.237 \\| 0.380|0.516 \\| 0.576|0.737 \\| 0.706|1.413 \\| 1.009|\n|+ Series Stationarization (Ours)|**0.111** \\| **0.237**|**0.219** \\| **0.335**|**0.421** \\| **0.476**|**1.092** \\| **0.769**|\n\n|ILI (MSE\\|MAE)|Predict 24|Predict 36|Predict 48|Predict 60|\n|-|-|-|-|-|\n|Vanilla Transformer|4.748 \\| 1.430|4.671 \\| 1.430|4.994 \\| 1.482|5.041 \\| 1.499|\n|+ Scale handling|3.276 \\| 1.125|3.629 \\| 1.192|3.730 \\| 1.245|3.661 \\| 1.238|\n|+ Series Stationarization (Ours)|**2.294** \\| **0.945**|**1.825** \\| **0.848**|**2.010**\\| **0.900**| **2.178** \\| **0.963**|\n\n|ETTm2 (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.572 \\| 0.552| 1.161 \\| 0.793|1.209 \\| 0.842| 3.061 \\| 1.289|\n|+ Scale handling|0.379 \\| 0.462|0.919 \\| 0.762|1.875 \\| 1.140|3.832 \\| 1.509|\n|+ Series Stationarization (Ours)|**0.192** \\| **0.274**| **0.280** \\| **0.339**|**0.334** \\| **0.361**| **0.417** \\| **0.413**|\n\n|Electricity (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.260 \\| 0.358| 0.266 \\| 0.367| 0.280 \\| 0.375|0.302 \\| 0.386|\n|+ Scale handling|0.282 \\| 0.378|0.323 \\| 0.411|0.340 \\| 0.423|0.336 \\| 0.417|\n|+ Series Stationarization (Ours)|**0.169** \\| **0.273**| **0.182**\\| **0.286**| **0.200** \\| **0.304**| **0.222** \\| **0.321**|\n\n|Traffic (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.647 \\| 0.357|0.649 \\| 0.356|0.667 \\| 0.364| 0.697 \\| 0.376|\n|+ Scale handling|0.798 \\| 0.508|0.740 \\| 0.417|0.760 \\| 0.448|0.878 \\| 0.476|\n|+ Series Stationarization (Ours)|**0.612** \\| **0.338** | **0.613** \\| **0.340**| **0.618** \\| **0.328** | **0.653** \\| **0.355**|\n\n|Weather (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer| 0.395 \\| 0.427 | 0.619 \\| 0.560 | 0.689 \\| 0.594 | 0.926 \\| 0.710 |\n|+ Scale handling|0.248 \\| 0.339|0.334 \\| 0.412|1.157 \\| 0.800|0.969 \\| 0.732|\n|+ Series Stationarization (Ours)|**0.173** \\| **0.223**|**0.245** \\| **0.285**|**0.321** \\| **0.338**|**0.414** \\| **0.410**|", " (2) Apply the Series Stationarization and the De-stationary Attention to ETSformer and FEDformer.\n\nNote that the Non-stationary Transformer can perform as a general framework and consistently promote the performance of various Transformers $\\underline{\\text{Table 4 of the main text}}$.\n\nTo further address the effectiveness of our proposed Series Stationarization and the De-stationary Attention, we apply them with the advanced ETSformer and FEDformer. 
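For readers who want to reproduce the comparison above, a minimal NumPy sketch of the two window normalizations and of the ADF-based check is given below. This is an illustrative sketch, not the released code: the per-variable treatment, the small epsilon term, and the averaging of the ADF statistic over variables are assumptions made here for brevity.

```python
# Minimal sketch (not the released code) of the two window normalizations
# compared above, plus an ADF-based stationarity check via statsmodels.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def series_stationarization(x, eps=1e-5):
    """x: (S, C) input window; normalize each variable by its own mean/std."""
    mu = x.mean(axis=0, keepdims=True)           # (1, C)
    sigma = x.std(axis=0, keepdims=True) + eps   # (1, C)
    return (x - mu) / sigma, mu, sigma           # stats kept for de-normalization

def scale_handling(x):
    """DeepAR-style scale handling: divide each variable by (mean + 1)."""
    return x / (x.mean(axis=0, keepdims=True) + 1.0)

def mean_adf_statistic(x):
    """Average ADF test statistic over variables (smaller = more stationary)."""
    return float(np.mean([adfuller(x[:, c])[0] for c in range(x.shape[1])]))
```

Applying `mean_adf_statistic` before and after `series_stationarization` on an input window mirrors the ADF comparison reported in the table above.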
The experimental results demonstrate that our Non-stationary Transformer can further promote ETSformer and FEDformer.\n\n|Exchange (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|ETSformer|0.085 \\| 0.204| 0.182 \\| 0.303|0.348 \\| 0.428|1.025 \\| 0.774|\n|ETSformer + Ours|**0.083** \\| **0.201**|**0.177** \\| **0.298**|**0.338** \\| **0.420**|**0.878** \\| **0.708**|\n|FEDformer|0.148 \\| 0.278|0.271 \\| 0.380|0.460 \\| 0.500|1.195 \\| 0.841|\n|FEDformer + Ours| **0.127** \\| **0.254**| **0.251** \\| **0.365**| **0.452** \\| **0.497**| **1.168** \\| **0.830**|\n\n|ILI (MSE\\|MAE)|Predict 24|Predict 36|Predict 48|Predict 60|\n|-|-|-|-|-|\n|ETSformer|2.536 \\| 1.021| 2.875 \\| 1.082 | 2.536 \\| **1.004**| 2.529 \\| 1.029|\n|ETSformer + Ours|**2.012** \\| **1.005**|**2.518** \\| **1.011**| **2.516** \\| 1.030| **2.366** \\| **1.022**|\n|FEDformer|3.228 \\| 1.260|2.679 \\| 1.080|2.622 \\| 1.078|2.857 \\| 1.157|\n|FEDformer + Ours|**3.200** \\| **1.160**| **2.455** \\| **0.969**|**2.484** \\| **0.985**| **2.771** \\| **1.069**|\n\n\n|ETTm2 (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|ETSformer|0.189 \\| 0.280| 0.253 \\| 0.319|0.314 \\| 0.357|0.414 \\| 0.413|\n|ETSformer + Ours|**0.187** \\| **0.270**| **0.253** \\| **0.310**|**0.312** \\| **0.349**| **0.409** \\| **0.405**|\n|FEDformer|0.203 \\| 0.287|0.269 \\| 0.328| **0.325** \\| **0.366**|**0.421** \\| **0.415**|\n|FEDformer + Ours|**0.191** \\| **0.272**| **0.263** \\| **0.317**| 0.343 \\| 0.366| 0.450 \\| 0.427|\n\n|Electricity (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|ETSformer|0.187 \\| 0.304|0.199 \\| 0.315|0.212 \\| 0.329|0.233 \\| 0.345|\n|ETSformer + Ours|**0.177** \\| **0.289**| **0.193** \\| **0.303**| **0.212** \\| **0.321**| **0.231** \\| **0.342**|\n|FEDformer|0.193 \\| 0.308|0.201 \\| 0.315|0.214 \\| 0.329|0.246 \\| 0.355|\n|FEDformer + Ours|**0.172** \\| **0.278**|**0.184** \\| **0.288**| **0.205** \\| **0.310**| **0.230** \\| **0.325**|\n\n|Traffic (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|ETSformer|0.616 \\| 0.399| 0.645 \\| 0.427| 0.628 \\| 0.399|0.628 \\| 0.388|\n|ETSformer + Ours|**0.610** \\| **0.370**| **0.614**\\| **0.380**| **0.623** \\| **0.386**| **0.626** \\| **0.384**| \n|FEDformer|0.587 \\| 0.366|0.604 \\| 0.373|0.621 \\| 0.383|0.626 \\| 0.382|\n|FEDformer + Ours|**0.579** \\| **0.348**| **0.599** \\| **0.358**| **0.616** \\| **0.363**| **0.623** \\| **0.380**|\n\n|Weather (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|ETSformer|0.197 \\| 0.281|0.237 \\| 0.312|0.298 \\| 0.353|0.352 \\| 0.388|\n|ETSformer + Ours|**0.169** \\| **0.223**| **0.220**\\| **0.269**| **0.284** \\| **0.317**| **0.344** \\| **0.362**| \n|FEDformer|0.217 \\| 0.296|0.276 \\| 0.336|0.339 \\| 0.380|0.403 \\| 0.428|\n|FEDformer + Ours|**0.187** \\| **0.234**| **0.235** \\| **0.271**| **0.289** \\| **0.308**| **0.359** \\| **0.353**|\n\n\n\n**Q4:** Hyper-parameter selection strategy for the baselines.\n\n(1) Most of the baselines are from the paper of Autoformer. 
By contacting the authors of Autoformer, we obtain the hyper-parameter selection strategy as follows:\n\n- N-BEATS: Grid search for hidden channel in {$256,512,768$}, number of layers in {$2,3,4,5$}, learning rate in {$5\\times 10^{-5}, 1\\times 10^{-4}, 5\\times 10^{-4},1\\times 10^{-3}$}.\n- LSTNet: Since this paper also experiments on the Traffic, Electricity, and Exchange datasets, the hyper-parameter setting is following the experimental details of its own paper.\n\n(2) The hyper-parameter selection strategy of the new baselines, such as N-HiTs, ETSformer, and FEDformer. Since these methods share the same benchmark, we use their official code on GitHub with three random seeds.\n\nWe have further clarified the hyper-parameter selection strategy in the supplementary materials of the $\\underline{\\text{revised paper}}$.\n", " Weather benchmark in Q2.\n\n|Weather (sMAPE\\|MASE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Transformer|73.607 \\| 9.776|75.917 \\| 9.987|75.185 \\| 13.242|81.749 \\| 12.850|\n|Transformer + Ours|60.462 \\| 5.044| 63.586 \\| 5.894| 64.736 \\| 5.395| 68.112 \\| 5.524|\n|Autoformer|68.726 \\| 10.368| 69.467 \\| 9.461|70.435 \\| 10.666| 73.300 \\| 10.921|\n|Autoformer + Ours|62.997 \\| 6.733| 64.784 \\| 6.394| 66.958 \\| 6.204| 67.936 \\| 4.987|\n|FEDformer|68.915 \\| 11.076| 68.268 \\| 9.736|71.029 \\| 9.820| 76.752 \\| 11.543|\n|FEDformer + Ours|60.699 \\| 4.247|63.033 \\| 4.421|65.047 \\| 4.539| 66.958 \\| 4.765|\n\n**Q3:** Add more advanced transform-based forecasting model for comparison.\n\n(1) Absolute comparison with ETSformer and FEDformer on MSE and MAE.\n\nAs per your request, we add the ETSformer and FEDformer as the baselines in $\\underline{\\text{Table 2 of the main text}}$. \n\nIt is notable that the FEDformer (newly accepted by ICML 2022) and ETSformer (arXiv 2022) have not been officially published during our submission. Our method is comparable to this concurrent work. 
In addition, our method is a general framework, which can be applied to these advanced methods.\n\n|Exchange (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Ours|0.111 \\| 0.237|0.219 \\| 0.335|0.421 \\| 0.476|1.092 \\| **0.769**|\n|ETSformer|**0.085** \\| **0.204**| **0.182** \\| **0.303**|**0.348** \\| **0.428**|**1.025** \\| 0.774|\n|FEDformer|0.148 \\| 0.278|0.271 \\| 0.380|0.460 \\| 0.500|1.195 \\| 0.841|\n\n|ILI (MSE\\|MAE)|Predict 24|Predict 36|Predict 48|Predict 60|\n|-|-|-|-|-|\n|Ours|**2.294** \\| **0.945**|**1.825** \\| **0.848**|**2.010** \\| **0.900**| **2.178** \\| **0.963**|\n|ETSformer| 2.536 \\| 1.021| 2.875 \\| 1.082 | 2.536 \\| 1.004| 2.529 \\| 1.029|\n|FEDformer|3.228 \\| 1.260|2.679 \\| 1.080|2.622 \\| 1.078|2.857 \\| 1.157|\n\n|ETTm2 (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Ours|0.192 \\| **0.274**| 0.280 \\| 0.339| 0.334 \\| 0.361| 0.417 \\| 0.413|\n|ETSformer|**0.189** \\| 0.280| **0.253** \\| **0.319**|**0.314** \\| **0.357**|**0.414** \\| **0.413**|\n|FEDformer|0.203 \\| 0.287|0.269 \\| 0.328| 0.325 \\| 0.366|0.421 \\| 0.415|\n\n|Electricity (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Ours|**0.169** \\| **0.273**| **0.182** \\| **0.286**| **0.200**\\| **0.304**| **0.222** \\| **0.321**|\n|ETSformer|0.187 \\| 0.304|0.199 \\| 0.315|0.212 \\| 0.329|0.233 \\| 0.345|\n|FEDformer|0.193 \\| 0.308|0.201 \\| 0.315|0.214 \\| 0.329|0.246 \\| 0.355|\n\n|Traffic (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Ours|0.612 \\| **0.338** | 0.613 \\| **0.340**| **0.618** \\| **0.328** | 0.653 \\| **0.355**|\n|ETSformer|0.616 \\| 0.399| 0.645 \\| 0.427| 0.628 \\| 0.399|0.628 \\| 0.388|\n|FEDformer|**0.587** \\| 0.366|**0.604** \\| 0.373|0.621 \\| 0.383|**0.626** \\| 0.382|\n\n|Weather (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Ours|**0.173** \\| **0.223**|0.245 \\| **0.285**|0.321 \\| **0.338**|0.414 \\| 0.410|\n|ETSformer|0.197 \\| 0.281|**0.237** \\|0.312|**0.298** \\| 0.353|**0.352**\\| **0.388**|\n|FEDformer|0.217 \\| 0.296|0.276 \\| 0.336|0.339 \\| 0.380|0.403 \\| 0.428|\n\n(See the next part for the relative promotion.)", " **Q2:** The concern of the experiment setting.\n\n(1) All the methods in our paper are compared under consistent and fair evaluation protocol.\n\nAs the reviewer stated, it is a convention in long sequence forecasting, that is including the zero-score normalization and reporting the MSE and MAE on the zero-score normalized time series. \n\nNote that in the experiments of the Non-stationary Transformer, **in pursuit of a fair comparison with previous methods, the zero-score normalization is also adopted as the preprocessing.** Thus, although Non-stationary Transformer adopts the Series Stationarization, both the prediction and ground truth for evaluation are still in the zero-score normalized time series space, which is the same as the convention setting. \n\nThus, in our paper, all the methods are evaluated based on the same benchmark and protocol, thereby presenting a fair comparison.\n\n(2) Compare the prediction on the original space with the scale-based evaluation metrics.\n\nTo further address the reviewer's concern about the \"coupled effect for the proposed normalization strategy and the normalization in preprocessing steps\", we provide new comprehensive results on the original space. For sMAPE (in the range of $[0,200]$) and MASE, we follow the calculation in N-BEATS [25]. 
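For reference, the two scale-based metrics can be sketched as below. This is one common formulation, not the authors' evaluation script; the exact N-BEATS-style aggregation referenced above, the per-series averaging, and the seasonal period are assumptions here.

```python
# One common formulation of the scale-based metrics (illustrative; the exact
# N-BEATS-style calculation referenced above may differ in details).
import numpy as np

def smape(y_true, y_pred, eps=1e-8):
    """Symmetric MAPE, bounded in [0, 200]."""
    denom = np.abs(y_true) + np.abs(y_pred) + eps
    return 200.0 * np.mean(np.abs(y_pred - y_true) / denom)

def mase(y_true, y_pred, y_insample, m=1, eps=1e-8):
    """MAE scaled by the in-sample MAE of a seasonal-naive forecast with period m."""
    scale = np.mean(np.abs(y_insample[m:] - y_insample[:-m])) + eps
    return float(np.mean(np.abs(y_pred - y_true)) / scale)
```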
The following results demonstrate that our proposed Non-stationary Transformer can still improve upon the previous methods generally, which reduces 25.09% sMAPE on Transformer, 4.84% on Autoformer, and 5.60% on FEDformer on average. Note that the value of relative promotion can be changed under different metrics.\n\n|Exchange (sMAPE\\|MASE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Transformer|6.594 \\| 22.948| 9.632 \\| 32.386| 11.166 \\| 36.892|14.252 \\| 47.601| \n|Transformer + Ours| 2.934 \\| 9.435| 3.860 \\| 12.692 | 5.650 \\| 18.941|8.509 \\| 29.881|\n|Autoformer| 3.032 \\| 10.076|4.194 \\| 13.845|5.671 \\| 18.744|8.904 \\| 31.192|\n|Autoformer + Ours|3.485 \\| 11.633| 3.948 \\| 13.372| 5.392 \\| 18.100|9.483 \\| 34.474|\n|FEDformer|2.989 \\| 9.818|4.271 \\| 14.048|5.468 \\| 18.205| 8.956 \\| 31.602|\n|FEDformer + Ours|2.857 \\| 9.356|4.107 \\| 13.451|5.578 \\| 18.436| 9.528 \\| 31.596| \n\n|ILI (sMAPE\\|MASE)|Predict 24|Predict 36|Predict 48|Predict 60|\n|-|-|-|-|-|\n|Transformer|52.355 \\| 7.231| 51.878 \\| 7.260| 52.286 \\| 8.171|51.998 \\| 8.912|\n|Transformer + Ours|35.391 \\| 3.114| 30.770 \\| 2.964| 30.485 \\| 3.355|35.354 \\| 3.685|\n|Autoformer|47.399 \\| 3.400|43.689 \\| 3.919|39.585 \\| 3.868| 40.946 \\| 4.208|\n|Autoformer + Ours|46.705 \\| 3.377|43.416 \\| 3.535|38.488 \\| 3.760|39.003 \\| 4.056|\n|FEDformer|46.343 \\| 3.381|37.618 \\| 3.369| 37.906 \\| 3.746|41.113 \\| 4.157|\n|FEDformer + Ours|42.662 \\| 3.322| 34.048 \\| 3.342| 34.716 \\| 3.617| 37.777 \\| 3.980|\n\n|ETTm2 (sMAPE\\|MASE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Transformer|72.888 \\| 6.493| 86.689 \\| 10.162| 95.832 \\| 10.475 |97.860 \\| 14.274|\n|Transformer + Ours|61.897 \\| 4.029| 65.713 \\| 4.714 |70.782 \\| 6.568|76.118 \\| 7.548|\n|Autoformer|62.317 \\| 3.862|65.183 \\| 4.111|67.377 \\| 4.855|73.798 \\| 5.141|\n|Autoformer + Ours|61.434 \\| 3.846| 65.232 \\| 4.037| 67.157 \\| 4.472| 71.554 \\| 5.197|\n|FEDformer|60.669 \\| 3.392| 62.891 \\| 3.910| 66.745 \\| 4.408| 71.449 \\| 5.077|\n|FEDformer + Ours| 60.352 \\| 3.328| 62.087 \\| 3.906|68.259 \\| 4.503| 72.295 \\| 5.133|\n\n|Electricity (sMAPE\\|MASE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Transformer|17.646 \\| 1.185|18.118 \\| 1.318| 17.986 \\| 1.246|18.599 \\| 1.228|\n|Transformer + Ours|14.068 \\| 0.975|14.395 \\| 1.018| 14.891 \\| 1.068| 15.634 \\| 1.092|\n|Autoformer|17.066 \\| 1.135|17.574 \\| 1.216|18.958 \\| 1.400| 19.569 \\| 1.320|\n|Autoformer + Ours|14.852 \\| 1.034| 15.736 \\| 1.109| 16.637 \\| 1.186| 18.184 \\| 1.304|\n|FEDformer|17.273 \\| 1.058| 17.405 \\| 1.109| 18.277 \\| 1.201| 18.479 \\| 1.265|\n|FEDformer + Ours|14.437 \\| 0.999| 14.725 \\| 1.051|15.549 \\| 1.145|16.229 \\| 1.170|\n\n|Traffic(sMAPE\\|MASE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Transformer|33.626 \\| 1.056| 34.033 \\| 1.033 | 33.173 \\| 1.058|33.879 \\| 1.139|\n|Transformer + Ours|32.685 \\| 0.987|33.189 \\| 0.980| 32.441 \\| 0.973| 33.371 \\| 1.031|\n|Autoformer|39.165 \\| 1.139| 39.787 \\| 1.202| 37.978 \\| 1.099| 41.439 \\| 1.234|\n|Autoformer + Ours|36.182 \\| 1.034| 38.019 \\| 1.087| 36.384 \\| 1.052| 37.858 \\| 1.130|\n|FEDformer|37.635 \\| 1.072|37.921 \\| 1.128|38.046 \\| 1.125|38.674 \\| 1.150\n|FEDformer + Ours|34.639 \\| 1.019| 35.206 \\| 1.042|35.712 \\| 1.054|36.678 \\| 1.113\n\n(The weather benchmark is in the next part.)", " Many thanks to Reviewer q3Tk for providing the thorough insightful comments. 
\n\n**Q1:** The necessity of Series Stationarization.\n\n(1) Literature analysis.\n\nAs we stated in the $\\underline{\\text{lines 29-34 of the main text}}$, the non-stationary time series can affect the prediction in both data and deep learning views.\n\n- \"The non-stationary time series is less predictable\", which is a basic idea of time series analysis.\n- \"Non-stationary time series corresponds to the change of statistical properties and joint distributions over time. It is a fundamental but challenging problem to make deep models generalize well on a varying distribution.\"\n\nThus, **Series Stationarization is well-supported by the common knowledge of time series analysis and deep learning**. Further, since the direct stationarization will lead to the over-stationary problem, we design the Non-stationary Transformer with the independent modules: Series Stationarization and De-stationary Attention. This design can make the model receive the stationarized inputs and avoid the over-stationary outputs simultaneously.\n\n(2) Experimental analysis.\n\nWe have provided an ablation study in $\\underline{\\text{Table 5 of main text}}$ and $\\underline{\\text{Table 6 of supplementary materials}}$, where we can find that the Series Stationarization can benefit the time series forecasting. Especially, with Series Stationarization, the input-96-predict-336 MSE of Reformer changes from 1.549 to 0.613 on ETTm2 and from 1.357 to 0.426 on Exchange.\n\nTo further address your concern, we newly add an ablation study using the raw data for the computation of attention score. It means that we remove the Series Stationarization and only maintain the De-stationary Attention. Here are the results. We can find that Series-stationarization can bring benefits to all benchmarks. 
For the datasets with stronger non-stationarity, the benefit of Series-stationarization becomes more significant.\n\n|Exchange (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.567 \\| 0.591 | 1.150 \\| 0.825 |1.792 \\| 1.084 | 2.191 \\| 1.159 |\n| + only De-stationary Attention| 0.611 \\| 0.613|1.202 \\| 0.840|1.516 \\| 0.981 | 2.894 \\| 1.377|\n| + Ours|**0.111** \\| **0.237**|**0.219** \\| **0.335**|**0.421** \\| **0.476**|**1.092** \\| **0.769**|\n\n|ILI (MSE\\|MAE)|Predict 24|Predict 36|Predict 48|Predict 60|\n|-|-|-|-|-|\n|Vanilla Transformer|4.748 \\| 1.430|4.671 \\| 1.430|4.994 \\| 1.482|5.041 \\| 1.499|\n| + only De-stationary Attention|4.734 \\| 1.424|4.927 \\| 1.482| 4.996 \\| 1.483|5.184 \\| 1.519|\n| + Ours|**2.294** \\| **0.945**|**1.825** \\| **0.848**|**2.010**\\| **0.900**| **2.178** \\| **0.963**|\n\n|ETTm2 (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.572 \\| 0.552| 1.161 \\| 0.793|1.209 \\| 0.842| 3.061 \\| 1.289|\n| + only De-stationary Attention|0.304 \\| 0.406| 0.820 \\| 0.652| 1.406 \\| 0.883| 2.858 \\| 1.108|\n| + Ours|**0.192** \\| **0.274**| **0.280** \\| **0.339**|**0.334** \\| **0.361**| **0.417** \\| **0.413**|\n\n|Electricity (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-| \n|Vanilla Transformer|0.260 \\| 0.358| 0.266 \\| 0.367| 0.280 \\| 0.375|0.302 \\| 0.386|\n| + only De-stationary Attention|0.253 \\| 0.351|0.257 \\| 0.358| 0.270 \\| 0.365 |0.295 \\| 0.380|\n| + Ours|**0.169** \\| **0.273**| **0.182**\\| **0.286**| **0.200** \\| **0.304**| **0.222** \\| **0.321**|\n\n|Traffic (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.647 \\| 0.357|0.649 \\| 0.356|0.667 \\| 0.364| 0.697 \\| 0.376|\n| + only De-stationary Attention|0.650 \\| 0.358| 0.655 \\| 0.358| 0.656 \\| 0.355|0.681 \\| 0.366|\n| + Ours|**0.612** \\| **0.338** | **0.613** \\| **0.340**| **0.618** \\| **0.328** | **0.653** \\| **0.355**|\n\n|Weather (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer| 0.395 \\| 0.427 | 0.619 \\| 0.560 | 0.689 \\| 0.594 | 0.926 \\| 0.710 |\n| + only De-stationary Attention| 0.296 \\| 0.364| 0.480 \\| 0.464 | 0.581 \\| 0.519 | 0.795 \\| 0.642|\n| + Ours|**0.173** \\| **0.223**|**0.245** \\| **0.285**|**0.321** \\| **0.338**|**0.414** \\| **0.410**|", " **Q5:** Compare with other benchmarks\n\nAs shown in the $\\underline{\\text{Table 2 and Table 3 of main text}}$, except Transformers, we have compared with various baselines, including the linear models: N-Beats (2019) and N-HiTs (2022), the LSTM-based baselines: LSTNet (2018) and the Classical method: ARIMA.\n\nAs per your request, we also include the GRU and LSSL as our baselines. Here are the results. The proposed Non-stationary Transformer still achieves the best performance in all benchmarks. 
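As a reading aid for this ablation, the wrapper that the "only De-stationary Attention" variant removes can be sketched as follows. This is a simplified sketch under our own assumptions (a generic `base_model` callable and a small epsilon), not the authors' released implementation.

```python
# Simplified sketch of the Series Stationarization wrapper that the
# "only De-stationary Attention" ablation removes: normalize the input window,
# forecast in the normalized space, then restore the statistics on the output.
# `base_model` is a placeholder for any forecasting backbone, not the released code.
import torch

def forecast_with_series_stationarization(base_model, x, eps=1e-5):
    """x: (batch, S, C) raw input window; returns predictions on the original scale."""
    mu = x.mean(dim=1, keepdim=True)                       # (batch, 1, C)
    sigma = torch.sqrt(x.var(dim=1, keepdim=True) + eps)   # (batch, 1, C)
    y_norm = base_model((x - mu) / sigma)                  # (batch, pred_len, C)
    return y_norm * sigma + mu                             # de-normalization
```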
Notably, LSSL can give a good performance on the Weather dataset with the strongest stationarity, but fails in other datasets, especially the datasets with strong non-stationarity.\n\n|Exchange (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|GRU|1.453 \\| 1.049| 1.846 \\| 1.179| 2.136 \\| 1.231| 2.984 \\| 1.427|\n|LSSL|0.395 \\| 0.474|0.776 \\| 0.698| 1.029 \\| 0.797|2.283 \\| 1.222|\n|Ours|**0.111** \\| **0.237**|**0.219** \\| **0.335**|**0.421** \\| **0.476**|**1.092** \\| **0.769**|\n\n|ILI (MSE\\|MAE)|Predict 24|Predict 36|Predict 48|Predict 60|\n|-|-|-|-|-|\n|GRU|5.914 \\| 1.734| 6.631 \\| 1.845| 6.736 \\| 1.857| 6.870 \\| 1.879|\n|LSSL|4.381 \\| 1.425|4.442 \\| 1.416|4.559 \\| 1.443|4.651 \\| 1.474|\n|Ours|**2.294** \\| **0.945**|**1.825** \\| **0.848**|**2.010** \\| **0.900**| **2.178** \\| **0.963**|\n\n|ETTm2 (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|GRU|2.041 \\| 1.073 | 2.249 \\| 1.112| 2.568 \\| 1.238| 2.720 \\| 1.287|\n|LSSL|0.243 \\| 0.342|0.392 \\| 0.448|0.932 \\| 0.724|1.372 \\| 0.879|\n|Ours|**0.192** \\| **0.274**| **0.280** \\| **0.339**| **0.334** \\| **0.361**| **0.417** \\| **0.413**|\n\n|Electricity (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|GRU|0.375 \\| 0.437|0.442 \\| 0.473| 0.439 \\| 0.473|0.980 \\| 0.814|\n|LSSL|0.300 \\| 0.392|0.297 \\| 0.390|0.317 \\| 0.403|0.338 \\| 0.417|\n|Ours|**0.169** \\| **0.273**| **0.182** \\| **0.286**| **0.200** \\| **0.304**| **0.222** \\| **0.321**|\n\n|Traffic (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|GRU|0.843 \\| 0.453|0.847 \\| 0.453|0.853 \\| 0.455| 1.500 \\| 0.805|\n|LSSL|0.798 \\| 0.436|0.849 \\| 0.481|0.828 \\| 0.476|0.854 \\| 0.489|\n|Ours|**0.612** \\| **0.338** | **0.613** \\| **0.340**| **0.618** \\| **0.328** | **0.653** \\| **0.355**|\n\n|Weather (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|GRU|0.369 \\| 0.406| 0.416 \\| 0.435|0.455 \\| 0.454| 0.535 \\| 0.520|\n|LSSL|0.174 \\| 0.252|**0.238** \\| 0.313|**0.287** \\| 0.355|**0.384** \\| 0.415|\n|Ours|**0.173** \\| **0.223**|0.245 \\| **0.285**|0.321 \\| **0.338**|0.414 \\| **0.410**|\n\n\n**Q6:** The writing issues.\n\n- Simplify the writing: Thanks for your valuable suggestions. We have rephrased all the long sentences and overcomplicated words. All the changes are included in the updated $\\underline{\\text{revised paper}}$.\n- The description of \"credited to their stacked structure\": In the original paper, the full description is \"credited to their stacked structure and the capability of Self-Attention\", which means that the Self-Attention can capture the dependencies from deep multi-level features. To eliminate the misunderstanding, we have rephrased this to \"credited to their stacked structure and the capability of Self-Attention, Transformers can naturally capture the temporal dependencies from deep multi-level features\" in the $\\underline{\\text{revised paper}}$.", " **Q4:** Why does De-stationary Attention not just operate on non-stationarized features directly?\n\n(1) Literature analysis.\n\nNote that the non-stationarized features can only be obtained when the input is not stationarized. But the non-stationary time series is hard for forecasting. 
As we stated in the $\\underline{\\text{lines 29-34 of the main text}}$, the non-stationary time series can affect the prediction in both data and deep learning views.\n\n- \"The non-stationary time series is less predictable.\"\n- \"Non-stationary time series corresponds to the change of statistical properties and joint distributions over time. It is a fundamental but challenging problem to make deep models generalize well on a varying distribution.\"\n\nBased on the above considerations, we propose Non-stationary Transformer with the interdependent Series Stationarization for improving the series predictability and De-stationary Attention for recovering the nonstationary information.\n\n(2) Experimental results.\n\nThe benefits of Series Stationarization and De-stationary Attention have been verified in the ablation study ($\\underline{\\text{Table 5 of main text}}$). As per your request, we newly add an ablation study by removing the Series Stationarization and only maintaining the De-stationary Attention. Here are the results. We have the following two observations: \n\n- \"Only De-stationary Attention\" can surpass the Vanilla Transformer in datasets with stronger stationarity, such as ETTm2, Electricity, Traffic, and Weather.\n- Series-stationarization can bring benefits to all benchmarks. For the datasets with stronger non-stationarity, the benefits of Series-stationarization can be more significant.\n\nIn summary, Series Stationarization and De-stationary Attention work interdependently. Both designs are necessary.\n\n|Exchange (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.567 \\| 0.591 | 1.150 \\| 0.825 |1.792 \\| 1.084 | 2.191 \\| 1.159 |\n| + only De-stationary Attention| 0.611 \\| 0.613|1.202 \\| 0.840|1.516 \\| 0.981 | 2.894 \\| 1.377|\n| + Ours|**0.111** \\| **0.237**|**0.219** \\| **0.335**|**0.421** \\| **0.476**|**1.092** \\| **0.769**|\n\n|ILI (MSE\\|MAE)|Predict 24|Predict 36|Predict 48|Predict 60|\n|-|-|-|-|-|\n|Vanilla Transformer|4.748 \\| 1.430|4.671 \\| 1.430|4.994 \\| 1.482|5.041 \\| 1.499|\n| + only De-stationary Attention|4.734 \\| 1.424|4.927 \\| 1.482| 4.996 \\| 1.483|5.184 \\| 1.519|\n| + Ours|**2.294** \\| **0.945**|**1.825** \\| **0.848**|**2.010** \\| **0.900**| **2.178** \\| **0.963**|\n\n|ETTm2 (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.572 \\| 0.552| 1.161 \\| 0.793|1.209 \\| 0.842| 3.061 \\| 1.289|\n| + only De-stationary Attention|0.304 \\| 0.406| 0.820 \\| 0.652| 1.406 \\| 0.883| 2.858 \\| 1.108|\n| + Ours|**0.192** \\| **0.274**| **0.280** \\| **0.339**| **0.334** \\| **0.361**| **0.417** \\| **0.413**|\n\n|Electricity (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-| \n|Vanilla Transformer|0.260 \\| 0.358| 0.266 \\| 0.367| 0.280 \\| 0.375|0.302 \\| 0.386|\n| + only De-stationary Attention|0.253 \\| 0.351|0.257 \\| 0.358| 0.270 \\| 0.365 |0.295 \\| 0.380|\n| + Ours|**0.169** \\| **0.273**| **0.182** \\| **0.286**| **0.200** \\| **0.304**| **0.222** \\| **0.321**|\n\n|Traffic (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Vanilla Transformer|0.647 \\| 0.357|0.649 \\| 0.356|0.667 \\| 0.364| 0.697 \\| 0.376|\n| + only De-stationary Attention|0.650 \\| 0.358| 0.655 \\| 0.358| 0.656 \\| 0.355|0.681 \\| 0.366|\n| + Ours|**0.612** \\| **0.338** | **0.613** \\| **0.340**| **0.618** \\| **0.328** | **0.653** \\| **0.355**|\n\n|Weather (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 
720|\n|-|-|-|-|-|\n|Vanilla Transformer| 0.395 \\| 0.427 | 0.619 \\| 0.560 | 0.689 \\| 0.594 | 0.926 \\| 0.710 |\n| + only De-stationary Attention| 0.296 \\| 0.364| 0.480 \\| 0.464 | 0.581 \\| 0.519 | 0.795 \\| 0.642|\n| + Ours|**0.173** \\| **0.223**|**0.245** \\| **0.285**|**0.321** \\| **0.338**|**0.414** \\| **0.410**|\n", " **Q3:** Other ways to reincorporate the non-stationary information.\n\nTo our best knowledge, this is the first work that explores the co-design of stationarization and de-stationarization. Thus from previous papers, we cannot obtain other ideas for other designs to \"reincorporate the non-stationary information\".\n\n- It is notable that by specifying the over-stationarization problem as the less distinguishable attention problem, we have narrowed down our design space into the attention calculation mechanism. Some other methods for the attention calculation of our model are included in the $\\underline{\\text{Table 9 of supplementary materials}}$, such as only $\\tau$ and only $\\mathbf{\\Delta}$.\n\n- To further address the review's concern, we also conduct an experiment by reincorporating $\\mu$ and $\\sigma$ into the feed-forward layer as the reviewer suggested. But since the effect of stationarization on the feed-forward layer is under-explored, this \"feed-forward\" reincorporation design may not be as well-motivated as our De-stationary Attention. In most cases (83%) of the new experimental results (see table below), our proposed De-stationary Attention achieves better performance. Hence it is a more optimal design with theoretical support.\n- Our literature survey contradicts the comment that \"there are lots of ways we could add information back in\". It will be very helpful if the reviewer could give some citations/references for other possible designs.\n\n|Exchange (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Series Stationarization|0.136 \\| 0.258|0.239 \\| 0.348|0.425 \\| 0.479|1.475 \\| 0.865|\n|+ Reincorporation on Feed Forward|0.116 \\| 0.243|0.280 \\| 0.383|**0.371** \\| **0.452**|**0.634** \\| **0.604**|\n| + De-stationary Attention (Ours)|**0.111** \\| **0.237**|**0.219** \\| **0.335**|0.421 \\| 0.476|1.092 \\| 0.769|\n\n|ILI (MSE\\|MAE)|Predict 24|Predict 36|Predict 48|Predict 60|\n|-|-|-|-|-|\n|Series Stationarization|2.573 \\| 0.980 | 1.955 \\| 0.870|2.057 \\| 0.902|2.238 \\| 0.982|\n|+ Reincorporation on Feed Forward|2.404 \\| 0.985|2.585 \\| 0.983|2.496 \\| 0.991 | 2.667 \\| 1.059|\n|+ De-stationary Attention (Ours)|**2.294** \\| **0.945**|**1.825** \\| **0.848**|**2.010** \\| **0.900**| **2.178** \\| **0.963**|\n\n|ETTm2 (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Series Stationarization|0.253 \\| 0.311| 0.453 \\| 0.404|0.546 \\| 0.461|0.593 \\| 0.489|\n|+ Reincorporation on Feed Forward|0.275 \\| 0.329| 0.406 \\| 0.403|0.502 \\| 0.465|0.694 \\| 0.575|\n|+ De-stationary Attention (Ours)|**0.192** \\| **0.274**| **0.280** \\| **0.339**| **0.334** \\| **0.361**| **0.417** \\| **0.413**|\n\n|Electricity (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Series Stationarization|0.171 \\| 0.275| 0.192 \\| 0.296| 0.208 \\| 0.306| **0.216** \\| **0.315**|\n|+ Reincorporation on Feed Forward| 0.170 \\| 0.274| 0.188 \\| 0.293| 0.206 \\| 0.309|0.223 \\| 0.323|\n|+ De-stationary Attention (Ours)|**0.169** \\| **0.273**| **0.182** \\| **0.286**| **0.200** \\| **0.304**| 0.222 \\| 0.321|\n\n|Traffic (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 
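To make the attention-level re-injection discussed in these responses concrete, a condensed PyTorch sketch is given below. The tensor shapes, the names `tau` and `delta`, and the assumption that both factors are already produced by small MLP projectors from the raw-series statistics are illustrative simplifications of the paper's de-stationary attention equation, not the released implementation.

```python
# Condensed sketch of De-stationary Attention: re-scale and re-shift the
# attention scores computed on stationarized projections before the softmax.
# q, k, v: (batch, length, d_k); tau: (batch, 1, 1) positive scale;
# delta: (batch, 1, length) shift. Both are assumed to come from small MLP
# projectors fed with the un-normalized series statistics (mu, sigma).
import torch

def de_stationary_attention(q, k, v, tau, delta):
    d_k = q.size(-1)
    scores = tau * torch.matmul(q, k.transpose(-2, -1)) + delta  # re-inject non-stationarity
    weights = torch.softmax(scores / d_k ** 0.5, dim=-1)
    return torch.matmul(weights, v)
```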
720|\n|-|-|-|-|-|\n|Series Stationarization|0.614 \\| 0.337|0.637 \\| 0.351| 0.653 \\| 0.359| 0.661 \\| 0.360|\n|+ Reincorporation on Feed Forward|**0.605** \\| **0.333**| 0.617 \\| 0.342| 0.635 \\| 0.349| **0.649** \\| **0.351**|\n|+ De-stationary Attention (Ours)|0.612 \\| 0.338 | **0.613** \\| **0.340**| **0.618** \\| **0.328** | 0.653 \\| 0.355|\n\n|Weather (MSE\\|MAE)|Predict 96|Predict 192|Predict 336|Predict 720|\n|-|-|-|-|-|\n|Series Stationarization|0.175 \\| 0.225| 0.273 \\| 0.297| 0.333 \\| **0.325**| 0.436 \\| 0.420|\n|+ Reincorporation on Feed Forward|0.178 \\| 0.226|0.256 \\| 0.295|0.338 \\| 0.351 |0.417 \\| 0.412|\n|+ De-stationary Attention (Ours)|**0.173** \\| **0.223**|**0.245** \\| **0.285**|**0.321** \\| 0.338|**0.414** \\| **0.410**|\n", " We would like to sincerely thank Reviewer PhZA for providing the insightful review.\n\n**Q1:** Reclarify the contribution of the \"over-stationarization\" problem.\n\nThe reviewer mentioned that \"Stationarisation losing information and thus potential predictive power is intuitively obvious.\" We agree with this argument but also would like to highlight the status of the literature:\n\n- It is obvious that \"stationarization loses information\" but how does stationarization negatively influences model behaviors is more important for algorithm design and improvement. To our best knowledge, previous methods focus mainly on how to stationarize time series **without elaboration and mitigation of its negative effect**. \n- In this paper, we delve for the first time into **the concrete effect of \"stationarization loses information\" in Transformers**. We solve this problem by designing the De-stationary Attention in Transformers, which is new for time series forecasting.\n\nConcretely, with insights analysis, we find out that a direct stationarization will cause **less distinguishable attention (temporal dependencies)** among different time series, which is defined as the over-stationarization problem ($\\underline{\\text{lines 40-42 of the main text}}$). We further clarify the following three items.\n\n- We focus on time series forecasting, in which the learned temporal dependencies (attention in Transformers) are essential to the forecasting performance. The over-stationarization problem causing less distinguishable attention is closely related to time series forecasting with Transformers.\n- Our found over-stationarization problem refers to a concrete situation of the degenerated model-learned attentions in Transformers. This finding is well-supported by the visualization in $\\underline{\\text{Figure 1 of the main text}}$ and the statistics in $\\underline{\\text{Figure 4 of the main text}}$. \n- The particular form of over-stationarization specifies our design space to be avoiding the less distinguishable attention in Transformers. Mitigating the over-stationarization in this specific design space leads us to the Non-stationary Transformer.\n\n\n**Q2:** Why incorporate $\\mu$ and $\\sigma$ this way? Clarify the motivation of De-stationary Attention design.\n\nAs clarified in **Q1**, our design is based on the findings of the over-stationarization problem in Transformers. The particular design is also directly motivated by the derivation of the vanilla attention (plain model) over non-stationary time series. 
\n\nFor a clear clarification, we sum up the motivations as the following pipeline: \n\n| Motivation | Design |\n| ------------------------------------------------------------ | ------------------------------------------------------------ |\n| Stationarization is important to time series forecasting ($\\underline{\\text{lines 28-36 of the main text}}$). | We adopt the Series Stationarization to enhance the stationarity of the input series. |\n| (1) Directly stationarization will cause less distinguishable attention in Transformers (over-stationarization problem). (2) The attention corresponds to the learned temporal dependencies, and therefore the less distinguishable attention will affect the forecasting performance. | **We focus on the attention calculation mechanism and attempt to avoid the less distinguishable attention**, namely avoiding the over-stationarization problem ($\\underline{\\text{lines 40-42 of the main text}}$). |\n| (1) Transformer can discover the particular temporal dependencies from raw series. (2) The input series is stationarized now. | It is a natural and direct way to **approximate the particular attention learned by Transformer without stationarization** ($\\underline{\\text{line 147 of the main text}}$). |\n| The analysis and derivation of the plain model in $\\underline{\\text{Section 1 of supplementary materials}}$. | We can reincorporate $\\mu$ and $\\sigma$ to the less distinguishable attention map $\\text{Softmax}(\\frac{\\mathbf{Q}^\\prime{\\mathbf{K}^\\prime}^\\top}{\\sqrt{d_k}})$ as $\\underline{\\text{Equation 6 of the main text}}$ to **approximate the desired attention** $\\text{Softmax}(\\frac{\\mathbf{Q}\\mathbf{K}^\\top}{\\sqrt{d_k}})$ learned from raw data. |\n\nMotivated by the above insights and derivations, we design the De-stationary Attention as a direct complement of the Series Stationarization to avoid the over-stationarization problem.", " The authors note that the stationarisation of time series removes information from the time series that can be used to aid in prediction. They call this problem the \"over stationarisation of time series\". Given that we want stationarity to improve model performance, but do not want to lose the information from the stationarisation transform, motivates the \"Non-stationary transformer\" introduced in the paper. \n\nThe model consists of a stationarisation component, and an additional component to reincorporate the stationarised information back into the time series. \n\nThey key component introduced is the De-stationary attention module. This transforms the dat a It is certainly important to reincorporate the information removed by the stationarity transform into the prediction model. The proposed method introduces some method of doing this and demonstrates that it improves performance accross a wide range of benchmarks.\n\nStationarisation losing information and thus potential predictive power is intuitively obvious. The novelty here (to me at least) is the destationary attention mechanism. This is another attention block that has some scaling taking into account non-stationary information. \n\nIt's not clear to me why it has to be done in this way. Why can I not just incorporate mu sigma into the MLP applied to the output of the self attention? My biggest issue with the paper is the motivation behind it. Stationarity loses information - fine. Adding information back in some way should improve performance - fine. This is known and not novel. 
We add some additional transformer to the existing transformer that accounts for some of this information - fine, but I'm not sure why I should be so interested in this architecture? There are lots of ways we could add information back in, why should I choose this one?\n\nThe experiments do not compare other mechanisms that incorporate such information. I havent been given a reason to be particularly excited about this mechanism over some other method of sticking in mu and sigma. Is there some motivation behind this mechanism that I am missing?\n\nIt would be nice if you had compared with non transformer benchmarks, e.g. LSSL/GRU/\n\n\"We refine that the predictive capability of non-stationary series is essential in real-world forecasting. By detailed analysis, we find out that current stationarization approaches will lead to the over-stationarization problem, limiting the predictive capability of Transformers.\" - From what I can tell, this 'refinement' is a single image showing different attention weights in different scenarios. I dont think there is enough done on this point in this paper to define this as a contribution. \n\n\nThe writing is not great, I found many parts quite difficult to read. A couple of examples of difficult-to-parse sentences:\n\n\"However, non-stationarity is the constitutional property of real-world time series that can be entangled with essential temporal dependencies for forecasting\"\n\n\"Considering the deep model scenario under the real case, de-stationary factors should be disentangled from the statistics of unstationarized x, Q and K.\"\n\nIn general I think a lot of the wording is overcomplicated and could be significantly simplified. \n\nYou say transformers perform well \"credited to their stacked structure\". Most DL models for time series are stacked in some way. I dont really see how this is a plus for transformers. Why does the De-stationary attention module not just operate on the existing, non-stationarised features directly? Yes", " This paper study the non-stationary problem for transformer based forecasting model. They find that the nomarlization step in pre-process will lead transformers to generate indistinguishable temporal attention, which harms the prediction capability. To address this issue, the authors propose the de-stationary attention mechanism which considers the mean and variance statistics when computing the attention score. Empirical studies show consistent improvements over different type of transformers. Pros:\n1. The paper is easy to read and the structure is well organized. \n2. The proposed strategy can be easily integrate with different transformer backbone.\n3. Empirical results show consistent improvement.\n\nCons:\n1. The proposed method is not very convincing. I appreciate the author for presenting the analysis for vanilla self-attention in section 3.2. However, I am confused for the motivation of using normalization layer. If the normalized data lead to the vanishment of so-called non-stationary information, why not remove the normalization before embedding layer and use the raw data for computation of attention score?\n2. My major concern is the experiment setting is unfair. The standard preprocess protocol for long sequence forecasting includes the zero-mean normalized, but the reported RMSE, MAE has not inverted the forecasting results. It is unclear the coupled effect for the proposed normalization strategy and the normalization in preprocessing steps. 
It would be more convincing if the authors could compare the prediction on the original space with the scale-based evaluation metric, like sMAPE, MASE, etc.\n3. It would be better to incorporate more advanced transformed based forecasting model, like ETSformer, FEDformer ( since these models has beaten autoformer with a margin), and justify the new normalization can also bring performance boost.\n4. There exist some open question for the experiments on long sequence forecasting setting. It would be more convincing if the author can present the hyper-parameter selection strategy for the baselines to guarantee the fair comparison, for example, the hyper-parameters for N-Beats and LSTNet. see above see above", " This paper proposes Non-stationary Transformers as a generic framework to tackle over-stationarization problem. It includes Series Stationarization and De-stationary Attention module, where Series Stationarization converts raw time series into more stationary ones for better predictability and De-stationary Attention is devised to recover the intrinsic non-stationary information in raw time series during self-station stage. Results show that Non-stationary Transformers are generally applicable with various Transformers and significantly improve results compared to the existing state-of-the-art. **Strengths**\n* The motivation is clear and the writing is easy to follow. The De-stationary Attention module is quite simple and clearly deduced in Equation 5, making its implementation straightforward and easy to understand.\n* The results are quite impressive and simple techniques can significantly improve over various state-of-the-art baselines.\n\n**Weaknesses**\n* The proposed Series Stationarization technique share lots of similarity with section 3.3 Scale handling in [1]. The authors need clearly discuss their difference.\n* $\\tau$ and $\\triangle$ are learned in the paper. However, if $\\tau$ and $\\triangle$ are used to approximate the variance and mean term, authors need to show if directly utilizing pre-computed statistics will work. Such an ablation study will make this work more solid.\n\nReference\n[1] David Salinas, Valentin Flunkert, Jan Gasthaus. DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks. What is the standard deviation of results listed in the paper? It will be good to know how statistically significant the proposed method is. Figure 4 requires readers to re-read Table 1 \"ADF Test Statistics\" column. Authors can consider adding information in figure 4 to clarify that datasets are sorted according to ADF Test Statistics so that readers are easier to follow.", " This paper focuses on forecasting and introduces a new method for scaling attention.\n* The input raw series is chunk-normalized to yield the input of the backbone transformer model\n* the chunk-wise scaling parameters are processed through MLPs to scale the rows of the attention matrix.\n\nThis procedure is showed to consistently improve performance of transformer-based forecasting models. Strengths\n* The proposed procedure is quite simple to implement\n* In my view, introducing this attention scaling is actually a nice and elegant thing to do.\n* The proposed scaled-attention mechanism obviously helps a lot in practice to improve performance\n\nWeaknesses\n* overall, I would say that the theoretical part is not very rigorous.\n* The paper focuses its whole discussion on the concept of \"stationarity\", which is a pretty well defined word with a strong historical background. 
basically, a time series x is stationary if time shifts do not change its distribution, but the use that is made here is. Here, the authors call a time series stationary if all (vector) samples are normalized. This is not related.\n\n-----\nEDIT after reviews and discussions\n\nI believe the authors did an amazing job during this review round and I actually think this paper has some big potential impact, even if I still have many things to say regarding its rigour, but I think this is not a fundamental flow that should prevent the paper from being published. * You should not use the words \"stationary\", \"stationarity\", \"de-stationary\" etc unless this is actually what you mean. \n* From a broad perspective, I see your proposed method as applying some chunk-wise transform, say x', \\theta = F(x), that returns some modified version for a chunk along with some parameters, and then using some MLP1(\\theta) and MLP2(\\theta) for scaling the attention matrix. At the output, you apply F^{-1}(out, \\theta). I genuinely think this is a great idea, and I think it is ok to motivate this by the following story:\n - Having all chunks have the same first two moments (mean and variance) is a first step in enforcing them to have the same distribution and this is what we do with our series normalization. In this case, our chunkwise transform is a simple normalization, and \\theta={\\mu, \\sigma}.\n - Imagining that we do applied just some linear transform on the input, here's what the attention matrix would be like. So we propose some scaling scheme for the attention inspired by this, that looks like.....\n- at the output, we can transform back.\n The advantage of this way of writing things is that you leave room for further research (notably more sophisticated chunkwise methods) and you don't wrongly pretend that normalizing means enforcing stationarity... because it's not the case.\n\nI think that this paper really brings great things on the table, but that it should undergo some strong modifications to be ok. The good news in my view is that I am not suggesting any change to the actual model, which is good, but just to the story that is told regarding the theory, so that I believe this is feasible (although it requires some heavy work).\n\n\nBelow, please find a list of comments on the go.\nTitle\n* I personally don't like all these \"rethinking *\" titles, that sound very pretentious in my view, somehow suggesting that things were not correctly thought about before and that everything will change from now on. Please note that your view of stationarity is not rigorous, adding a bit to my frustration.\n\nIntroduction:\n* \"From the aspect of data\": awkward. Furthermore, this statement about predictability is not very well motivated in my opinion.\n* \"From the view of deep learning\": why would that be limited to deep learning ?\n* \"a hot topic\": this is an exagerated statement, provided you give 3 references, among which one is almost 15 years old.\n* Figure 1 is not clear enough in my opinion. I don't really understand what the input data to the transformer is: are these the I, II, III chunks ? What is that the second column ? Just zooms for the data ? And finally, what are these \"attention matrices\" ? what's the number of rows, columns. I suspect some interpolation for these images ? Are you using some `interpolation='nearest'` for your plot ?\n\nRelated work\n* \"which is the essential property\": an essential\n* \"is the key to predictability\": a key. 
I am not sure this statement is supported by such generic references.\n* I am a bit annoyed by the fact that you constantly talk of stationarity and non-stationarity, but you don't define the terms.\n\n\nNon-stationary Transformers\n* \"the key to time series predictability\": again, too strong. \"an important element\" ?\n* The reference to figure 1 is not clear. As mentioned already I don't really see what Figure 1 brings on the table exactly, maybe just because it is poorly explained and I don't understand it.\n* \"To deal with the dilemna [...] simultaneously\": you wrote that already\n* although this is classical to use \"T\" for transposition, please note that using \\top instead gives a better rendering.\n\n* I really must protest against your statement that a simple normalization is enough to \"transform it into standard Gaussian distribution\". I don't see any connection between having x be centered and unit variance and x being N(0,1). Of course, the first two moments are matching, but this is not at all sufficient to make x Gaussian. You should remove this statement and simply mention that you are centering and normalizing it.\n* I understand the use of \"Hadamard product\", but the fact is that it can be confusing to some readers. If you like, you could be using a statement like \"\\frac{a}{b} and a\\circ b are element-wise division and product, respectively\".\n* I don't understand how these \\mu_x and \\sigma_x vectors are computed. From a single S-dimensional chunk, how do you end up computing a S-dimensional mean and a S-dimensional variance statistic ? Checking out the followup, it turns out you are simply normalizing each chunk, so that \\mu_x and \\sigma_x are just scalars. You should not be introducing this useless notation.\n* Note that Normalization module eliminates the [...] statistics\": I don't see at all how this statement is supported. The only thing you enforce is that every chunk is normalized. You may not at all write that this leads to all of them having the same \"statistics\". As you know, you can't seriously write that mean and standard deviation are the only statistics for a time series... What about all the auto-correlation structure, higher-order statistics, etc. I am fine with your normalization, but please don't write that it results in chunks having the same distribution ! You are simply matching the first two moments.\n\n* You didn't explain how the \"sliding window\" is achieved: is there some overlap between your chunks ? Or do you obtain your chunks with a simple folding operation (window size=hop size)\n* I don't see the point in using uppercase \"Series Stationarization\" if you're not going to use the (unfortunate) acronym SS. Maybe just use lower case then.\n* \"the base models will receive stationarized inputs\": again, I feel uncomfortable with accepting the statement that a series of normalized chunks is a stationary time series. Just imagining a dataset with outliers, your procedure will obviously change the content of a chunk containing the outlier in a undesired way, so that future research might focus on some other transformation than mean-variance normalization, like quantiles or such. 
Don't misread me: the method is nice, but in my view you are claiming something that is not exactly there.\n* \"for an instance\": for instance\n* again, the reference to figure 1 is unclear to me, because I didn't get what you want exactly to be conveying with this figure\n* \"following the same distribution\": it is not because they are both centered and unit variance that two vectors have the same distribution !\n* \"to tackle the trivial attention\": awkward. to tackle this over-stationarization problem ?\n* \"To simplify the analysis [...] input series\": this assumption is completely unreasonable and it is useless in my view to rely on it seriously for anything. This said, I understand that assuming f to be linear allows you to derive the *general form* of your \"de-stationary attention\", as what would be happening in the linear case. The fact is that you are actually not using this assumption in the followup since your attention module don't use these \\mu and \\sigma as such, but instead through some MLPs. I think that the story would be more readable if you introduced it as you do in the linear case, but then simply propose this attention scaling procedure ase a way to reinject the normalization parameters back into attention in a way that generalizes the linear case. This would read much better to me.\n* \"Since it is a convention to conduct [...] is reduced to a scalar\". This sentence makes no sense to me. Just take \\mu_x and \\sigma_x as scalars right from the start and be good with this.\n* As stated above, it is not necessary to write that your \\tau and \\Delta are there to \"approximate \\sigma_x^2 and K\\mu_q\" (you know them already !) but rather that you introduce them as scaling factors in analogy to what happens in the linear case. Don't forget that your derivations assume a linearity that is not there.\n\n\nExperiments\n* What do you call \"stationarity\" and \"relative stationarity\" for the \"over-stationarization problem\" section ? This is not defined.\n\n\nConclusion\n* \"The impressive generality and performance of the proposed framework\": please change this pretentious statement.\n The authors are a bit pretentious at times, but other than that it is all right" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "ZGnFhvSMhFi", "jdNFG3_4HE-", "dUx7DRHqdLE", "M1uwH5kVmmD", "khBkzzwZWh", "nips_2022_ucNDIDRNjjv", "XqzOc4ax-o4", "cRsEi9uUaEf", "KgJDHX3UdH4", "0mKB71pMTsN", "dZWw2cXfdNA", "Z0ZJu4xocbH", "_dsLH_Pv5E3", "M1uwH5kVmmD", "brW6RJvSmQ", "evEKk77-rUm", "iuRCL_LBw0tm", "khBkzzwZWh", "nips_2022_ucNDIDRNjjv", "nips_2022_ucNDIDRNjjv", "nips_2022_ucNDIDRNjjv", "nips_2022_ucNDIDRNjjv" ]
nips_2022_lme1MKnSMb
VCT: A Video Compression Transformer
We show how transformers can be used to vastly simplify neural video compression. Previous methods have been relying on an increasing number of architectural biases and priors, including motion prediction and warping operations, resulting in complex models. Instead, we independently map input frames to representations and use a transformer to model their dependencies, letting it predict the distribution of future representations given the past. The resulting video compression transformer outperforms previous methods on standard video compression data sets. Experiments on synthetic data show that our model learns to handle complex motion patterns such as panning, blurring and fading purely from data. Our approach is easy to implement, and we release code to facilitate future research.
Accept
This paper uses transformers for video compression, using fewer components compared to competing methods. Video compression is an important application in machine learning, and the use of transformers is well-timed w.r.t. generally strong interest in the architecture. There were some concerns over clarity of presentation, as well as issues with some of the experimentation, which the reviewers and authors mostly seem to have been able to work out. There is one exception in one reviewer: the authors seem to have worked very hard to address all of this reviewer's concerns, but said reviewer did not adjust their score at all. In addition, there was disagreement on some of that reviewer's more prominent points. So I recommend acceptance of this paper. Overall, w64c stood out as an exceptional reviewer, as they engaged beyond their own review, interacting actively with both the authors and the other reviewers. xvyj brought up some good points, but they were almost tyrannical towards the authors and never gave back in terms of score when the authors clearly satisfied their concerns.
train
[ "t4EUBBy1GC", "P1-Ke2oF_t", "sz2vuy9Oajh", "G1gE1sye-uB", "0zyLYrYVqRO", "_neza8ITVhr", "owirTA2Gcsu", "QQt7v2SvPG", "EGoljyKuZlU", "B9XQLZ0law3", "HOTR4h-9zqB", "vBTejJZMCGM", "TclkYX_W1VY", "iWC9e00irz", "zDlwNOLegxN", "5j2HRbzXDMZ", "mT6BjJMtohP", "4lVFr7lRD7B", "Yeu-jPG5XTy", "atMi363ppvn", "Y1zYHnDTEi", "wA3dKxtJEvU", "7an1olKB5w_", "r_6zlwehFpe", "M2HxUoy6CLC" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have updated the manuscript with the modified figures and addressed the typos. Let us know if Fig. 1 in particular is an improvement from your point of view.", " (See response above)", " Thank the authors for providing extra results on VCT. They are an addition to the paper. I decide to keep my current evaluation.\n\nThe ablation study on training data and the comparison with Liu et al. [20] are essential for rigorous experimental results. Based on the current ablation, some more ablation studies would help: (these are time-consuming experiments so they are just suggestions for future revision)\n* **Ablation on training data**: consider adding DVC or DVC Pro as another baseline -- what would other optical flow-based methods perform in this figure?\n* **Ablation on comparison with Liu et al.**: I conjecture that the gap between \"Preliminary CNN baseline\" and VCT is still non-trivial. I would recommend some more experiments on this.\n\nFinally, I do not agree with some of reviewer xvyj's comments. I acknowledge the concerns about the novelty of this work since there were some related ideas in the literature. I don't think we should completely disregard some novel modules proposed in this work. Some of these concerns (e.g., novelty and comparison with other works) don't quite fit into the criteria of a clear reject (3).", " Thank you for the clarifications which addressed my concerns and I believe my current rating is consistent with the updated version. The proposed method has a reasonable inference speed. It uses a larger model compared to baselines and I believe this should be discussed in the limitations, but I don't think it under-weighs the contribution of this paper. The paper presents a cleanly designed model for video compression with favorable empirical performance and clearly presented ablation studies. It is a good fit for the venue in my opinion. ", " - It's on MCL-JCV, we updated the supplementary.\n- VCT: 114M, SSF: 27M parameters\n- We updated the supplementary to use a log scale, thanks for pointing this out.", " * What's the \"evaluation loss\" in A.3/Figure 9 calculated on? Training data, validation data, or UVC/MCL-JCV?\n* What's the model size of SSF and VCT in terms of parameters?\n* Please also fix the scale of x-axis in the future version -- log scale or uniform scale.", " We have updated the main manuscript with blue text highlighting new lines.\n\n- To `7MPC`: We have updated the manuscript to fix the typos, updated Fig. 1 (please take a look), updated Fig 2, added Section A.2 (in the supplementary material) on entropy coding, and updated Table 3, and added a new Table 4.\n- To `w64c`: We have extended the note on shifts in the updated manuscript, and added Section A.3 to the appendix on data ablations, and Section A.4 with a comparison to Liu et al.\n- To `yzCS`: We have added a new Table 4 with runtime comparisons\n- To `xvyj`: We have added Section A.4 with a comparison to Liu et al.", " As part of our reply to Reviewer xvyj, we wrote the following, which we highlight again here:\n\nWe see the following main differences between our method and Liu et al's method, that should be part of a fair ablation study:\n\n1) Amount of training data\n2) 1 vs 2 frames of context\n3) CNN vs Transformer\n\nTo figure out which of these contributes to the significant difference in rate-distortion we see between VCT and Liu et al in Fig. 4, we added an ablation study to the supplementary materials, see the newly added A. 
4.\nWe copy the relevant text passage here:\n \n```\nIn Fig. 10, we compare VCT against the CNN based method by Liu et al. [20], which studies a similar setting as VCT, but uses CNNs for the temporal entropy model.\nWe provide preliminary results of reproducing Liu et al.'s work using CNNs, trained on our data (purple dot, denoted ``Preliminary CNN baseline'').\nWe can see that the baseline obtains a similar rate-distortion performance as the work by Liu et al.\nThus, similar to SFF (see Sec. A.3 above), we see that the CNN based approaches do not benefit from additional training data.\n\nThe main remaining differences to [20] are: i) they use 1 frame of context (vs. VCT's 2), ii) they rely on CNNs instead of our transformer.\nWe thus plot the model from the ablation study in Table 2, where we only use 1 frame as context, showing that this makes bitrate worse (green cross in Fig. 10, $+18%$ bitrate increase). \nFrom this, we can conclude that the transformer is responsible for the bulk of the remaining gap, ie, the bitrate increases around 50% when going to a CNN.\n```\n\n---\n\nWe hope this addresses your concern on \"Some ablation study experiments to break down the performance gap would help to understand the model\"\n", " > But for so similar paper, should we not make comprehensive and fair ablation studies? \n\nWe see the following main differences between our method and Liu et al's method, that should be part of a fair ablation study:\n\n1) Amount of training data\n2) 1 vs 2 frames of context\n3) CNN vs Transformer\n\nTo figure out which of these contributes to the significant difference in rate-distortion we see between VCT and Liu et al in Fig. 4, we added an ablation study to the supplementary materials, see the newly added A. 4.\nWe copy the relevant text passage here:\n \n```\nIn Fig. 10, we compare VCT against the CNN based method by Liu et al. [20], which studies a similar setting as VCT, but uses CNNs for the temporal entropy model.\nWe provide preliminary results of reproducing Liu et al.'s work using CNNs, trained on our data (purple dot, denoted ``Preliminary CNN baseline'').\nWe can see that the baseline obtains a similar rate-distortion performance as the work by Liu et al.\nThus, similar to SFF (see Sec. A.3 above), we see that the CNN based approaches do not benefit from additional training data.\n\nThe main remaining differences to [20] are: i) they use 1 frame of context (vs. VCT's 2), ii) they rely on CNNs instead of our transformer.\nWe thus plot the model from the ablation study in Table 2, where we only use 1 frame as context, showing that this makes bitrate worse (green cross in Fig. 10, $+18%$ bitrate increase). \nFrom this, we can conclude that the transformer is responsible for the bulk of the remaining gap, ie, the bitrate increases around 50% when going to a CNN.\n```\n\nWe think this improves our paper, and thank you for your suggestion. **If there is a specific additional ablation that would further clarify our method, please let us know.**\n\n> Moreover, you use 10x more training data than SSF, is this fair?\n\nWe note that SSF in the paper uses the same training data as VCT, apologies if this was unclear.\nNevertheless, as we show in the added A. 3, this does not have an effect on SSF\n\n> Isn't it for higher compression ratio and faster encoding/decoding speed? 
Other paper can compare with SOTA traditional codec, why can not your paper compare with it?\n\nIndeed, and we achieve higher compression ratios compared to previous neural codecs, while using a simplified setting. We are open about our runtime in the paper (Table 2), and we believe that our approach advances understanding of neural compression methods to a degree that is sufficient for a NeurIPS paper. If we had a method that is better and faster than VTM, we would try to sell it, not publish it as open research :)\n\n> The author says that \"the scope of this work is to show that neural codecs can be significantly simplified and that motion priors are not necessary. \" However, Liu et al. have demonstrated this.\n\nWe emphasize that we show the first neural compression paper that achieves competitive performance in this setting, see initial reply.\n", " The author says that \"the scope of this work is to show that neural codecs can be significantly simplified and that motion priors are not necessary. \" However, Liu et al. have demonstrated this.", " - The proposed method in this paper has a very similar idea to Liu et al. Both of them use the latents of previous frames to estimate the probability of the latents of the current frame. Liu et al used CNN and this paper uses transformer. I do not imply that we should retrain all past models in 2022. **But for so similar paper, should we not make comprehensive and fair ablation studies?** Moreover, you use 10x more training data than SSF, is this fair?\n- What is the purpose of developing neural codec? **Isn't it for higher compression ratio and faster encoding/decoding speed?** Other paper can compare with SOTA traditional codec, why can not your paper compare with it?", " **The overall performance comparison with Liu et al is not enough as the model from Liu et al was trained on 2020.** We compare the rate-distortion tradeoff and significantly outperform their model. Yes, that paper is from 2020, but we fail to see what else is there to do? It seems like the reviewer is implying that we should train all of the models published in the past using the machinery from 2022? \n\n**transformer-vs-CNN** As we wrote in the reply to reviewer w64c, we are happy to add this comparison.\n\n**Data** We conducted the study as part of the rebuttal and will add it to the manuscript.\n\n**VTM** As we write in the rebuttal, \"the scope of this work is to show that neural codecs can be significantly simplified and that motion priors are not necessary. \"\n\nWe thank the reviewer for detailed comments, but fail to see how the above points are grounds for dismissal of this work. The goal of our research is to sufficiently advance the state of the art, which we believe was clearly demonstrated.\n", " Thank the authors for the response. However, my concerns still exist:\n- Comprehensive comparisons between the proposed method and Liu et al. “Conditional Entropy Coding for Efficient Video Compression” ECCV 2020 should be conducted. The overall performance comparison with Liu et al is not enough as the model from Liu et al was trained on 2020. More fair ablation studies should be conducted. Just mentioned by reviewer w64c, is transformer-vs-cnn, training data, number of reference frames, or some other modules? From the framework comparison, Liu et al. used CNN to estimate the probability from the previous frame and this paper uses a transformer. 
The additional difference is the block-based auto-regressive entropy model, and the authors never claim it is a core contribution or novelty. \n- It is surprising to know that this paper uses 10x more data with respect to SSF. Since the authors have conducted this ablation study, why not show the results?\n- The authors do not give a convincing explanation on why not compare with HM or VTM. “Versatile Learned Video Compression” in 2021 has compared with VTM. In 2022, I think this comparison is not too much to ask.\n", " Thanks for your kind words regarding contributions and for the reply. Regarding your additional concerns:\n- **Shift Sequences**: We are in agreement, except maybe that shift sequences are a very specific \"unnatural\" sequence. We actually interpret Fig 5 in the opposite way: It is impressive how much better we do than SSF despite no explicit motion prediction and warping. \n- **Training data**: As mentioned, training data for our unsupervised compression task is abundant. We note that our \"private\" data is simply downloaded from public video streaming platforms. An example available large dataset today would be [YT8m](https://research.google.com/youtube8m/download.html).\n- **CNNs**: From preliminary results we understand that the gap is from \"transformer-vs-cnn\", since we see in the paper that more reference frames gives limited gains and we know that our CNN based baseline (SFF) did not benefit from more training data. Given points 1 and 2, would adding a transformer vs CNN comparison make you reconsider the rating?", " Thank the authors for the response, including more details and results on VCT. I appreciate the contributions in this work -- both the proposed framework and other experimental results as detailed in the original review. After careful consideration, I decided to adjust my rating based on the following reasons:\n* **Concerns regarding performance on shift sequences.** Fig 5 shows that VCT underperforms HEVC medium on shift sequences despite superior performance on natural sequences. This points out an important limitation of VCT in handling motions. Fig 5 also lacks a comparison with other SOTA methods using motion estimation networks -- including DVC and DVC Pro which are publicly available and can be easily reproduced.\n * The idea of VCT + motion-based components is not a trivial extension and is questionable as simplicity (e.g., not using motion compensation) is a main contribution of VCT. Although benchmark performance on challenging datasets is important, we should also investigate the limitations on sequences with large motion, including synthetic shift sequences.\n* **Some limitations in training.** Quantitative results in Fig 4 are not necessarily a fair comparison. SSF and VCT use extra private training data. This limits the impact of VCT on future research as it is difficult to reproduce and follow.\n* **Concerns about the novelty of \"using the latent representations of previous frames to estimate the probability for that of the current frame\", as pointed out xvyj.** What's the key of VCT to outperform this previous model? Is transformer-vs-cnn, training data, number of reference frames, or some other modules? Some ablation study experiments to break down the performance gap would help to understand the model, beyond benchmark performance.", " Thank you for your constructive and detailed comments. We believe that we have addressed the raised concerns below. Are there any remaining points to clarify? 
We would be happy to engage in discussions if needed.", " We thank the reviewer for recognizing and acknowledging the strengths of the proposed approach. We will address the raised questions and concerns here.\n\n1. **Parameter Counts/Runtimes**: Here is a preview of the requested parameter counts, we will add the full information to the paper: VCT: 114M // ELF-VC: 10.7M // FVC: 26M. Many neural compression methods do not detail inference time and do not have code available, but we will add the following from the top methods in Fig. 4: ELF-VC with speed as a main focus reports an impressive 35 FPS decode time on 720p, VCT obtains 6.7 FPS on 720p, DCVC reports 857ms inference on 1080p (i.e. 1.1FPS, although it's unclear if encoding is included), FVC reports 548ms on 1080p (1.8 FPS). We hope that future work will follow VCT in more clearly reporting speed and, more importantly, releasing code.\n2. **Naive solutions**: We note that Table 2 contains one naive solution, i.e., not considering previous frames, while the other naive solution, namely using a uniform distribution, is discussed in the main text. Given that the latter always requires $\\log_2 |\\mathcal S|$ bits per symbol (9MB per HD frame), or 4.5bpp, it is 20x worse than the ablation presented in Table 2. We are unsure whether this covers your question or you have another naive solution in mind?\n\nRegarding your questions:\n- **PSNR of ablations**: Without LRP, the PSNR at frame $i$ only depends on $y_i$, which we losslessly transmit. Hence, using fewer frames only increases the bitrate but doesn’t affect PSNR. As a result, even if the model is underfitting, and e.g. predicing $p \\sim U$, we would still achieve 36.1 dB PSNR, only the coding efficiency would decrease significantly.\n- **More context does not help**: (a) Your point regarding capacity is well aligned with what we observe in practice with large transformer models -- the learning problem becomes more challenging with the increase in the input sequence length required to model more contextual information. (b) At a short-temporal scale, the previous frame encapsulates most of the information necessary to predict the current frame. The main exception is the presence of occlusions, but on average one needs to see many more frames to handle occlusions (i.e. an object might be occluded for 30+ frames and then become visible again).\n- **Temporal-aware quantization**: This is a very good point and something we also considered during model development and will consider in future work. In principle, it seems sub-optimal to use independent encoders, but it does simplify the training setup.\n", " [Response Part 2]\n\n**Comparison to the checkerboard model** [6]: Our usage of block-autoregression contains important novelty: Firstly, previous work [1-7] always relies on additional side information z to condition the entropy model, whereas it is not required in our setting (noted on L156). Secondly, the key idea in [6] is to decompose latents into two sets of squares and code one set given the other. Our approach is only superficially related in that we also use squares, but instead of a two stage global factorization, we _independently_ encode each square, without relying on any neighbors. We can do this thanks to the temporal redundancy of video, i.e., we do not need to rely on global context within the current latent. 
Similar to [6], [7] (available only on arXiv), also splits latents into sets and codes them in sequence, which again is a different approach from what we do.\n\nDespite all this, we note that we never claim our block-autoregressive scheme is a core contribution or novelty.\n\n**“Motion prediction and warping operations are complex and hand-crafted in mainstream residual-coding based framework, and this paper does not need these designs”**: Note that apart from claiming motion prediction is complex, we also write that previous approaches “constrain themselves to work well only on data that matches the architectural biases” (L18). As a big benefit of removing motion prediction, our model works well on other data, and we see this empirically in our synthetic data (Fig. 5).\n\nIn our setup, using autoregressive constitutes a very general factorization of a joint distribution that can hardly be called hand-crafted. It is used in various ML applications such as [language modelling](https://arxiv.org/pdf/1910.10683.pdf) and [image synthesis](https://compvis.github.io/taming-transformers/). Adding LRP corresponds to a single dense layer that maps the transformer features to less channels followed by addition (L204), certainly less complex than the implementation of motion compensation. Finally, our model can be trained end-to-end (i.e., without the stages), but thanks to our modular design it’s easier to just pretrain some components, as done in most recent neural video approaches. This also for fast iteration and research, and is not an important property of the model. \n\n**“The compression ratio improvement over previous SOTA ELF-VC is limited”**: Our empirical results show that our algorithm is competitive and often outperforms SOTA with a simplified design without modeling motion. Thank you for the suggestion to include BD-rate numbers as well, we will compute them and add to the manuscript. \n\n**“Please use x264 and x265 in the experiment description”**: Good point, we will update the manuscript.\n\n**“In addition, outperforming x264 and x265 is very easy in 2022. Comparisons with JM, HM, VTM are recommended [...]”**: The scope of this work is to show that neural codecs can be significantly simplified and that motion priors are not necessary. Our next step is to scale the proposed approach and make it competitive to standard non-neural codecs. We include the ffmpeg implementations of HEVC/H.264 as a well-known baseline.\n\nQuestions:\n- Right now, we consider transformers for video compression an early research direction. We chose to report TPU numbers to show that eventually, a VCT-like approach will be feasible for deployment. GPUs are known to lag behind TPUs for attention-based architectures.\n- As mentioned on L280, we do not observe significant gains from using more context. We will add this to the ablation Table 2 to make this more visible.\n- Indeed, other splits would be possible, but we did not explore this for this paper. 16 is natural given 4x4 blocks.\n", " [Response Part 1]\n\nWe thank the reviewer on the detailed assessment of our work. 
We would like to first reiterate that the main focus of this work is to vastly simplify existing learned neural compression algorithms for video and show that our resulting approach outperforms previous state-of-the-art, without having to rely on motion compensation and residual compression.\n\n**Lack of novelty**: We believe that this claim is unsubstantiated and misses the main point of this work mentioned above.\n\n“Using the latent representations of previous frames to estimate the probability for that of the current frame has been explored in [1].”: As we note on L78, the authors of [1] “also losslessly encoded frame-level representations, but rely on CNNs for temporal modeling.” We compare to [1] in Fig. 4 (the approach is referred to as “Liu et al.”), and we can see that their approach performs significantly worse (>1dB in PSNR) demonstrating that our usage of transformers for temporal modeling is critical for SOTA performance. We emphasize that we show the first neural compression paper that achieves competitive performance in this setting. Furthermore, in contrast to CNNs, Transformers naturally scale to larger temporal contexts by operating on sequences of arbitrary length.\n\nIn terms of related work, [2, 3, 4, 5] focus on _image_ compression, which is orthogonal to what we are trying to achieve in this work. These works show that single-frame entropy coding based on the transformer architecture results in competitive models. In fact, given the modular design behind VCT, we could replace the frame encoding with any of these models and potentially improve the results presented in this work. As we stress in the paper (L91-98) that going from images to video is incredibly challenging. The results of conceptually simple approaches which look like straightforward extensions are presented in Table 2 which can be 20x worse wrt coding efficiency. We would also like to note that we cite [1, 3, 4], while [2, 5] appeared after the NeurIPS deadline (CVPR 2022). \n\nFinally, we note that transformer architecture itself was already ported, with minor differences with the original Vaswani et al. paper, to most problems of computer vision. In that sense, all work which applies transformers to new tasks coupled with entropy coding is by design “limited” in novelty. However, we believe that the key behind the success and popularity of models such as the [Vision Transformer](https://arxiv.org/abs/2010.11929) was to actually show that it works in this new domain and can get rid of many of the architectural priors employed previously -- like we saw in our paper. This involves a careful design of the model and the training recipe. It’s not as simple as “just using a transformer”.\n", " We thank the reviewer for the diligent review and address the remaining questions here.\n\n- **“Most modules follow the most common designs and are not curated for this specific model”**: Indeed, this is by design -- we believe that neural compression models should build on generic components which are being independently improved by other researchers (e.g. in computer vision), which we know how to train at scale, and how to debug. We believe that this is one of the main advantages of our model.\n- **“What’s the HEVC setting in Figure 5?”**: We use the medium setting, disabling B-Frames (see L255-L228).\n- **“I wonder what other models [...] would behave on the shift sequences”**: That’s a good question! We assume that methods based on powerful pre-trained optical flow networks will fall between SSF and HEVC. 
However, we do not have access to these codebases, but we are hopeful that such comparison will be done after our model and synthetic data loader are released.\n- **“It’s unclear what’s the cost of not using motion compensation in the VCT”**: Fig 4 shows that for _natural_ videos motion compensation is not critical, as we outperform motion-based approaches on MCL-JCV and UVG. In fact, Fig 5 shows for videos which are not explained by translation-based motion, one actually benefits from not being constraint to use motion compensation as a main component. Whether combining VCT with a motion-based component yields additional benefits will be an interesting investigation for future work.\n- **“It seems that the VCT handles shifts better when the shift is the same size as the patch size.”**: This observation is correct and we also comment on it in L255. The reason for this is that our encoder is a CNN, so it is only shift-equivariant for shifts which are multiples of the stride (16). Specifically, if the input shifts by exactly 16 pixels, the representation shifts by exactly one symbol. Any shift in [1, 15] pixels causes the representation to change in a complex way (cf. [Making Convolutional Networks Shift-Invariant Again](https://arxiv.org/abs/1904.11486)).\n- **Ablation study on the size of the training data**: We performed this ablation study and found that VCT requires approximately 10x more data with respect to SSF. We assume that it is possible to obtain similar performance as we report in the paper with less data and augmentation strategies such as sub-sampling temporally, reversing videos, shifting videos, etc., following works that use transformers for other vision tasks. That being said, given the unsupervised task of compression, training data is abundant and we leave this specific exploration for future work.\n- **HEVC dataset and metrics**: Regrettably, we do not have access to the HEVC dataset and metrics. We do believe that proper benchmarking is indeed a cornerstone of science, and to help alleviate this challenge we will be open-sourcing our synthetic data set. Regarding additional metrics, an additional advantage of the modular design of VCT is that the image encoder/decoder pair can be swapped-out. Thus, using other metrics or training a generative decoder are easy to explore.", " We thank the reviewer for careful reading and suggestions. We will update the manuscript to clarify the minor issues and typos, and discuss the major items here:\n\n**Entropy Coding**: We opted for a higher-level description due to space constraints, but we can definitely provide more detail for the benefit of the reader by providing a definition and a pointer to [Huffman coding](https://en.wikipedia.org/wiki/Huffman_coding), followed by a concrete instance with a few symbols, such as: \n\n> Let us assume symbols {A, B, C}, and P(A) = ½, P(B) = ¼, P(C) = ¼. Now, an optimal code is to assign the bitstring 0 to A, 10 to B, and 11 to C. Then, the example sequence AABC would become 001011. Note that A takes exactly $-log_2(P(A)) = 1$ bits, and B, C take 2 bits each. The entropy coding scheme we use works similarly conceptually, but is also optimal if your probabilities are not powers of 2: [Arithmetic Coding](https://en.wikipedia.org/wiki/Arithmetic_coding).\n\nNote that we will release code to reproduce the entire paper, including the entropy coding part. 
Please let us know whether this is sufficient.\n\n\n**Propagating errors**: The claim on L39 is correct as we transmit representations using the following steps:\n- First, given $N$ frames, we extract all quantized representations $y_1, y_2, \\\\dots, y_N$ independently using the encoder $E$ (L32). Then, we use the transformer to _losslessly_ transmit these by predicting distributions and using arithmetic coding (L35). We emphasize that if the transformer is bad at predicting distributions, it may not be efficient in terms of the number of bits used to encode $y_i$, but the transmission will nevertheless be_lossless_.\n- As described on L112-119, the receiver can use the transformer to recover $y\\_1, y\\_2, \\\\dots, y\\_N$. It then applies LRP, calculating $y’\\_i = y\\_i + z\\_\\\\text{cur}$. Note that $z\\_\\\\text{cur}$ is a function of $y\\_i, y\\_{i-1}, y\\_{i-2}$ and not a function of $y’\\_i$. Thus, we never use the previous $y’\\_{i-1}, y’\\_{i-2}, \\\\dots$ to calculate $y’\\_i$, we only use $y\\_i$, which does not contain errors. You can imagine this as a graph where information only flows in one direction, there is no connection from $y’\\_i$ to the $i+1$-th reconstruction.\n\n\nHence, the i-th reconstruction does not depend on all previous encodings and the errors are bounded, as claimed in the text.\n\n**Table 3 and Fig 1**: Regarding Fig 1, you are absolutely correct, only previous tokens of $y\\_i$ are processed to calculate the probability of the current token. However, our aim was to show a high-level overview of the information flow with respect to all tokens. We do agree that striking the right balance between high-level overviews and exact low-level operations is challenging, and could provide an alternative visualization of only a single step (where $y\\_{i,<t}$ is at the input, and $P(y\\_{i,t}|y\\_{i,<t},y\\_{i-1},y\\_{i-2})$ at the output) if that would improve the clarity. At the moment we will improve the caption to clarify this potential discrepancy. Finally, we will update Table 3 to include additional runtime information.", " The authors tackle the problem of video compression using transformers. The proposed architecture doesn't include specific biases and priors like motion prediction, blurring, etc. Transformers are used to predict the distribution of the future representation given the earlier video frames. The idea is to exploit the temporal redundancy across frames and the spatial consistency within frames which can help in entropy coding. They show good results using transformers on standard video compression data sets (MCL-JCV, UVG, synthesized videos from CLIC2020) as compared to previous approaches. Strengths:\n* There is no explicit biases or priors, which may help the proposed architecture to generalize better across datasets\n* The authors introduce independence assumptions among the blocks of each video frame which shrinks the attention matrix and enables parallel execution on subsets of the video frame.\n\nWeaknesses:\n* The paper is not easily readable as there are several inconsistencies as pointed below\n* In Fig 1, it seems that y_i is fed to the Masked Block-Autoregressive Transformer, but from the later text, it seems it only previous tokens of y_i are fed to get the probability of the current token of y_i.\n* line 207: Even though it is a bounded window, if z_cur is used to decide y'_i then y'_i does indirectly depend on all previous encodings which can propagate errors. 
This counters the claim made in line 39 that the architecture doesn't propagate temporal errors.\n* Entropy coding: The distribution of the next token can help identify the number of bits to encode the token. But it is not clear how one of the S symbols can be arbitrarily encoded into that many bits.\n* Availability of code or at least pseudo-code in the appendix could have helped understand how entropy coding is done in the current approach\n* Table 3: To evaluate better, the runtime of the baselines should also be presented\n\nMinor:\n* Figure 4: bits per pixel is not cited\n* line 189: what is distortion loss?\n* line 280: Even though no further gain is observed from more context, it may help if it is shown in the Table 2.\n\nTypos:\n* line 36: '2.1.1' instead of '2.2.1'\n* line 56: 'for' is redundant\n* line 108: The output of T_cur should be z_cur, but the figure has z_joint.\n* Eqn 2: round(y_i) should be round(\\tilde(y)_i) ?\n* line 192: integration is done over du ?\n\n It may help if the authors can tell how the entropy coding is done in the current context (see weaknesses above). Also if the authors can address any other weaknesses mentioned above then it can help in better understanding of the approach. Yes the authors have adequately addressed the limitations and potential negative societal impact of their work.", " The authors proposed a video compression transformer (VCT) for neural video compression. Input frames are mapped to representations, and a transformer is used to model the dependencies. The VCT model achieves SOTA performance on MCL-JCV and UVG datasets. Both theoretical and empirical results are promising and will benefit future research. **Strengths**:\n\n1. The proposed framework is simple and effective. The frames are decoded from the quantized representation, which prevents the problem of error propagation for residual coding-based video compression methods.\n\n2. The initial frames are decoded with zero padding for y_i-1 and y_i-2 so the I-frame and P-frame models in residual coding-based framework are unified into one model. This design large simplifies the video compression framework and will benefit future research.\n\n3. In order to speed up the execution, the authors assumed independence between blocks and only overlapping tokens from i-1 and i-2 are used. This seems a bold assumption at the beginning but proved effective.\n\n**Weaknesses**:\n\n1. Most modules follow the most common designs and are not curated for this specific model. Some weaknesses/improvements are considered by the authors in Section 3.2-Independence and Section 6.\n\n2. Other weaknesses and concerns are detailed in the ‘Question’ section. 1. What’s the HEVC setting in Figure 5? Since SSF is not using a pretrained optical flow model, I wonder what other models (e.g., DVCPro or DCVC that builds on SPyNet) would behave on the shift sequences. It seems that learning-based models achieve superior benchmark performance than HEVC by handling distortions such as sharpen/blur/fade better, but not as well to shifts. It’s unclear what’s the cost of not using motion compensation in the VCT framework, whether the cost is negligible or if the transformer predicts good enough distributions such that an ME module is unnecessary.\n\n2. It seems that the VCT handles shifts better when the shift is the same size as the patch size. Are the authors aware of/considering any approach to improve this problem?\n\n3. VCT and SSF are trained on different data from most other works. 
Although larger-scale training has been proved necessary on various language/vision tasks, is there an ablation study on the size of the training data for VCT? This would help to understand the limitation of the model, its scalability, and its potential for future work.\n\n4. A less important question: have the authors considered the HEVC dataset and other video compression metrics (e.g., AVQT or perceptual-based metrics)? As VCT is a novel architecture, it would help to understand how it compares with previous works as well as decide future directions. In this work the authors proposed VCT for neural video compression. The model design is novel and elegant and the model architecture follows most common designs. It helps to show the efficacy of the proposed framework and many improvements are also possible (as summarized in Section 6). Some possible limitations in terms of motion compensation and training data are mentioned in Question 1 and 3.", " The paper proposes a transformer architecture for video compression and demonstrates its favorable performance over prior methods. The method is explicit into two stages: firstly lossily encode each frame independently into a quantized latent space, and secondly use a transformer model to find a probability mass function under which the quantized latent sequence can be efficiently transmitted. The proposed model relies on data-driven prior instead of architectural biases as used in many prior methods. Strengths:\n1. The paper is clearly written and easy to follow.\n2. The method achieves favorable empirical performance on standard benchmarks. \n2. The paper discusses the the type of temporal patterns that the model learns to exploit by examining synthetic videos. The discussion provides a good understanding of the model's capability when applied to data sets with different motion patterns. \n\nWeaknesses:\n1. The comparisons of model size and inference time with baseline methods are not listed.\n2. The paper provides an intuitive explanation in line 108-110 of why the second stage is advantageous compared to the naive solution. The hypothesis could be tested on synthetic datasets, which will further strengthens this claim and verifies if this is indeed what the type of inductive bias that the model learns from the data. 1. In table 2, the PSNR of model variants with 0, 1 and 2 previous frames are the same. Does the author have an intuition why this is the case? \n2. The paper suggests that more than 2 context frames do not provide further gains in line 280. Is this a result of the type of motion presented in the testing datasets? With larger temporal motions in the testing data, will more frames provide additional benefits, or the it's the capacity of the current model architecture that limits more context information from being efficiently used? \n2. A potential extension of the work is to incorporate temporal-aware quantization in the first-stage training instead of processing each frame individually. The limitations and societal impact are addressed. ", " This paper uses transformer to capture the relevance between the latent representations of different frames. By using the latent representations of two previous frames as the entropy model input, a good probability estimation can be obtained for that of the current frame. In addition, an improved auto-regressive model is used, where 16 times are needed to decode a frame. 
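The summary above describes the two mechanisms at the heart of VCT: a transformer entropy model conditioned on the latents of the two previous frames, and block-autoregressive decoding of the current latent in 16 steps. Below is a minimal, purely illustrative numpy sketch of that decoding loop — it is not the authors' code; `predict_distribution` is a hypothetical stand-in for the transformer, and entropy decoding is reduced to accumulating the ideal code length of each symbol.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 64        # hypothetical quantized-symbol alphabet size
BLOCK = 4     # a 4x4 block of tokens -> 16 autoregressive steps per block

def predict_distribution(prev_latents, decoded_so_far):
    """Stand-in for the transformer entropy model: return a categorical
    distribution over the S symbols for the next token, conditioned on the
    two previous frames' latents and the tokens already decoded in this block."""
    logits = rng.normal(size=S)              # placeholder for a real model's output
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def decode_block(prev_latents, bitstream_symbols):
    """Decode one 4x4 block of the current latent in 16 sequential steps.
    The arithmetic decoder is abstracted away: we just read the next symbol
    and accumulate its ideal code length, -log2 p(symbol)."""
    decoded, total_bits = [], 0.0
    for t in range(BLOCK * BLOCK):
        probs = predict_distribution(prev_latents, decoded)
        sym = int(bitstream_symbols[t])      # symbol the entropy decoder would recover
        total_bits += -np.log2(probs[sym])   # cost under the predicted distribution
        decoded.append(sym)                  # fed back for the next autoregressive step
    return np.array(decoded).reshape(BLOCK, BLOCK), total_bits

prev = [np.zeros((BLOCK, BLOCK)), np.zeros((BLOCK, BLOCK))]  # zero padding for the first frames
symbols = rng.integers(0, S, size=BLOCK * BLOCK)
_, bits = decode_block(prev, symbols)
print(f"decoded 16 tokens, ideal cost ~ {bits:.1f} bits")
```

A better entropy model changes only how sharp the predicted distributions are, and hence the accumulated bit count — the decoded symbols themselves are recovered losslessly either way, which is the point the authors make in their response above.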
\nStrength\n\n- This paper investigates the conditional entropy coding-based framework rather than following the mainstream residual-coding based framework where the motion estimation and motion compensation are needed. This is an encouraging attempt. \n- Investigate different kinds of synthetic videos including shifting, sharpen/blur, and fading. Some analyses are conducted.\n\nWeakness\n\n- The most issue of this paper is that the novelty is limited. The core contributions have been investigated in other papers.\n - Using the latent representations of previous frames to estimate the probability for that of the current frame has been proposed in [1]. The difference is that one previous frame is used in [1] and two previous frames are used in this paper.\n - Using transformer as the basic unit has been investigated in many neural image codecs, like [2,3,4,5]. Directly applying transformer into neural video codec is straightforward and the novelty is limited.\n - The block-based auto-regressive entropy model is a trade-off between the normal auto-regressive model and checkboard prior model [6]. Actually, the checkboard prior can be regarded as a kind of block-based auto-regressive entropy model, where the block contains two parts. In addition, [7] has investigated a similar idea where each block contains 4 parts. The difference of this paper is that each block contains 16 parts. \n - This paper claims that the motion prediction and warping operations are complex and hand-crafted in mainstream residual-coding based framework, and this paper does not need these designs. The block-based auto-regressive entropy model, LRP, and 3-stage training are also complex and hand-crafted designs. \n - The compression ratio improvement over previous SOTA ELF-VC is limited. The RD-curves are very close. The BD-rate numbers are best presented. \n - Actually, HEVC and H.264 only specify the decoding part and exclude the encoder part. They do not have “veryslow” or “medium” settings. Please use x264 and x265 in the experiment description. In addition, outperforming x264 and x265 is very easy in 2022. Comparisons with JM, HM, VTM are recommended because they represent the best encoders of H.264, HEVC, VVC. The work [8] in 2021 has already compared with VTM. \n\n[1]. Conditional Entropy Coding for Efficient Video Compression, ECCV 2020\n\n[2]. The Devil Is in the Details: Window-based Attention for Image Compression, CVPR 2022\n\n[3]. Entroformer: a transformer-based entropy model for learned image compression, ICLR 2022\n\n[4]. Transformer-based transform coding, ICLR 2022\n\n[5]. Joint Global and Local Hierarchical Priors for Learned Image Compression, CVPR 2022\n\n[6] Checkerboard Context Model for Efficient Learned Image Compression, CVPR 2021\n\n[7] High-Efficiency Lossy Image Coding Through Adaptive Neighborhood Information Aggregation, arXiv 2022.\n\n[8] Versatile Learned Video Compression, arXiv 2021\n - The authors use TPU to measure the training and testing time. However, TPU is not available for most researchers. Profiling in NVidia GPU is recommended, and it is easy for the comparison with other related works. \n- This paper manually uses the latent representations of 2 previous frames, why not 3 or 4 frames?\n- Each block is divided into 16 parts, why not 4, 9, 25,or 36 parts?\n No potential negative societal impact " ]
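One reviewer above asks how a single symbol out of S can be encoded into a fractional number of bits given the predicted distribution. The sketch below is a toy illustration of why this works, not the paper's implementation: an arithmetic (range) coder operates on the whole sequence, so the total cost approaches the sum of -log2 p(symbol) even though no individual symbol maps to an integer number of bits.

```python
import numpy as np

def ideal_code_length(dists, symbols):
    """Information content of a symbol sequence under per-step predicted
    distributions: sum of -log2 p(symbol). Individual terms can be
    fractional (e.g. 0.3 bits) -- no single symbol maps to whole bits."""
    return sum(-np.log2(p[s]) for p, s in zip(dists, symbols))

def arithmetic_interval_bits(dists, symbols):
    """Conceptual arithmetic coding: each symbol narrows [low, high) to the
    sub-interval the predicted CDF assigns to it. Naming a point inside the
    final interval takes about -log2(width) bits for the whole sequence."""
    low, high = 0.0, 1.0
    for p, s in zip(dists, symbols):
        cdf = np.concatenate(([0.0], np.cumsum(p)))
        width = high - low
        low, high = low + width * cdf[s], low + width * cdf[s + 1]
    return -np.log2(high - low)

rng = np.random.default_rng(1)
dists = [rng.dirichlet(np.ones(8)) for _ in range(20)]   # toy per-token predictions
msg = [int(rng.integers(0, 8)) for _ in range(20)]
print(f"sum of -log2 p : {ideal_code_length(dists, msg):.2f} bits")
print(f"interval width : {arithmetic_interval_bits(dists, msg):.2f} bits (matches, up to rounding)")
```

Practical range coders add a small constant overhead on top of this ideal length; the distributions here are random placeholders, whereas in VCT they would come from the transformer entropy model discussed in the reviews above.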
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 5 ]
[ "wA3dKxtJEvU", "B9XQLZ0law3", "zDlwNOLegxN", "mT6BjJMtohP", "_neza8ITVhr", "iWC9e00irz", "nips_2022_lme1MKnSMb", "zDlwNOLegxN", "HOTR4h-9zqB", "HOTR4h-9zqB", "vBTejJZMCGM", "TclkYX_W1VY", "Yeu-jPG5XTy", "zDlwNOLegxN", "atMi363ppvn", "nips_2022_lme1MKnSMb", "r_6zlwehFpe", "M2HxUoy6CLC", "M2HxUoy6CLC", "7an1olKB5w_", "wA3dKxtJEvU", "nips_2022_lme1MKnSMb", "nips_2022_lme1MKnSMb", "nips_2022_lme1MKnSMb", "nips_2022_lme1MKnSMb" ]
nips_2022_sGugMYr3Hdy
Pragmatically Learning from Pedagogical Demonstrations in Multi-Goal Environments
Learning from demonstration methods usually leverage close to optimal demonstrations to accelerate training. By contrast, when demonstrating a task, human teachers deviate from optimal demonstrations and pedagogically modify their behavior by giving demonstrations that best disambiguate the goal they want to demonstrate. Analogously, human learners excel at pragmatically inferring the intent of the teacher, facilitating communication between the two agents. These mechanisms are critical in the few demonstrations regime, where inferring the goal is more difficult. In this paper, we implement pedagogy and pragmatism mechanisms by leveraging a Bayesian model of Goal Inference from demonstrations. We highlight the benefits of this model in multi-goal teacher-learner setups with two artificial agents that learn with goal-conditioned Reinforcement Learning. We show that combining BGI-agents (a pedagogical teacher and a pragmatic learner) results in faster learning and reduced goal ambiguity over standard learning from demonstrations, especially in the few demonstrations regime.
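The abstract above hinges on a Bayesian model of Goal Inference (BGI) shared by the pedagogical teacher and the pragmatic learner. As a rough, assumption-laden sketch (not the authors' implementation), BGI can be read as scoring each candidate goal by the likelihood of the demonstrated actions under a goal-conditioned policy and normalising; `toy_policy_prob` below is a made-up stand-in for such a policy.

```python
import numpy as np

def bayesian_goal_inference(demo, goals, policy_prob, prior=None):
    """Score each candidate goal by the likelihood of the demonstrated
    actions under a goal-conditioned policy, then normalise:
    P(g | demo) is proportional to P(g) * prod_t policy_prob(s_t, a_t, g)."""
    log_post = np.log(prior) if prior is not None else np.zeros(len(goals))
    for i, g in enumerate(goals):
        for s, a in demo:
            log_post[i] += np.log(policy_prob(s, a, g) + 1e-12)
    log_post -= log_post.max()            # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy example with a made-up policy that prefers actions matching the goal id.
goals = [0, 1, 2]
def toy_policy_prob(state, action, goal):
    return 0.8 if action == goal else 0.1

demo = [(None, 1), (None, 1), (None, 2)]   # actions mostly consistent with goal 1
print(bayesian_goal_inference(demo, goals, toy_policy_prob).round(3))  # mass concentrates on goal 1
```

In the paper's framing, the teacher applies this kind of inference to its own trajectories (rewarding itself when its goal is recoverable, hence pedagogy), and the learner applies it to the teacher's demonstrations (hence pragmatism).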
Accept
After a strong rebuttal from the authors and an extensive discussion among the reviewers, I believe the paper's pros outweigh its cons and this paper will be a valuable contribution to NeurIPS. I recommend it for acceptance and encourage the authors to address the reviewers' comments for the camera-ready version of the paper, especially regarding the weaknesses of empirical evaluation and differentiation against conventional GCRL.
train
[ "3-yWVzedpHz", "FfdrJez4tMe", "iHKw0OQnXw-", "ke7_CDav0mk", "k4gkiaGMgOU", "A3DHHs4ZhK", "bPJYKXbuP_t", "D1dsynpx3-5J", "FVlDTyeYNcK", "AxZmNrudgcT", "GbYUeprZKW", "FF36DIxsfhz", "3T_UdhN98z-", "4UoW-llrhfJ", "wKwpAbWYvZi", "kvngpOfSmG", "BoiFqgiU2lK", "PEffkr4T8AI", "hohjcjdZWrC", "Bmah8rnViHc", "gMyGXvzyPsK", "hhmnN3lPUvB", "Nx17r22J8Q" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As the deadline for the end of the discussion period approaches, we would like to add an argument that may clarify the concern.\n\nIn real-world problems, inferring goals from demonstrations and more generally actions is a crucial part to exchange information and learn efficiently, as a consensus of work in developmental psychology and machine learning shows [1, 2, 3]. Indeed, most of the time agents do not communicate their goals explicitly when they act [1] and agents have to infer the goals of other agents. Humans, and even infants, particularly excel at inferring goals when observing the behavior of other persons. This allows them to understand their intention, how to achieve goals later on their own, and is generally a powerful cognitive mechanism for their development and learning. In our case, the agent receiving the demonstration does not have access to the goal of the demonstration, in order to mimic what would happen in real-life scenarios.\n\nIn order to best satisfy the reviewer, we updated the manuscript to better state the assumption that the learner does not have access to the goals of the demonstrations (and thus must infer it). In our description of the teacher/learner setup in the context of GCRL (Phase 2 in Sec.3.2), we now clearly state this assumption: \"Note that we assume that the learner does not have access to the goal of the demonstration and thus must infer it, mimicking real-life situations where humans regularly have to infer other people's goals [1]\".\n\nWe hope that these arguments will help the reviewer understand the importance of goal inference in communication between agents, and thus our motivation to apply to GCRL.\n\n\n[1]: Gweon, Hyowon. \"Inferential social learning: Cognitive foundations of human social learning and teaching.\" Trends in Cognitive Sciences 25.10 (2021): 896-910.\n[2]: Baker, Chris L., Joshua B. Tenenbaum, and Rebecca R. Saxe. \"Goal inference as inverse planning.\" Proceedings of the Annual Meeting of the Cognitive Science Society. Vol. 29. No. 29. 2007.\n[3]: Dik, Giel, and Henk Aarts. \"Behavioral cues to others’ motivation and goal pursuits: The perception of effort facilitates goal inference and contagion.\" Journal of Experimental Social Psychology 43.5 (2007): 727-737.", " We warmly thank the reviewer for constructively asking us to better put forward our most important results so as to make the paper more convincing. First, the reviewer expresses the concern that the current set of results may only \"demonstrate that the pedagogical demonstrations with goal inference improves sample efficiency but not necessarily overall accuracy or only does so incrementally\". On that specific point, we agree that only looking at Fig. 3 p. 8, the reader may not be able to sort out whether the gain is in sample efficiency or goal accuracy: on the low data regime (100 demonstrations) we see that the literal learner fails to reach more than 60% of goals, but is it because it wrongly inferred the goals or because its policy is not trained enough? The answer to that is in Table 1, p. 7. By sorting out GIA (Goal Inference Accuracy) and GRA (Goal Reaching Accuracy), we can see that the literal learner mainly fails because it wrongly infers the goal.\n\nTo better stress this point, we have edited the caption of Fig.3.\n\nInstead of \"Results for FBS environment (Goal Reaching Accuracy (GRA) with different numbers of demonstrations per goal). 
Stars indicate significance (tested against naive+literal).\",\n\nwe now have:\n\n\"Global learner performance in the FBS environment (Goal Reaching Accuracy, GRA) with different numbers of demonstrations per goal. Table 1 shows that the drop in GRA under the few demonstrations regime (right) is mainly due to incorrect goal inference in the literal learner or with the naive teacher. Stars indicate significance (tested against naive+literal).\"\n\nWe hope that this change will better highlight the outcome of our empirical study.\n\nBeyond this specific point, the reviewer is advising us to better stress the importance of our results. Actually, without calling upon a more complicated example, with Fig. 3 we already have an example where the literal learner paired to the naive teacher succeeds much less than the pragmatic + pedagogical pair. In the low demonstration regime, imagine that when you teach your robot, it fails 50% of the time versus 10% of the time with the pragmatic + pedagogical pair. That already makes a concrete and significant difference.\n\nDue to page limits (we already had to edit some other text so that our additions fit in), we could not add a whole paragraph to better stress this point, but we slightly edited the conclusion (mentioning the increase in global performance, see p.9) to make the point clearer. If the paper is accepted and benefits for an additional page, we can add the following sentence if the reviewers agree on that:\n\n\"In real world applications such as teaching a robot where human demonstrations are costly, the pair of pedagogical+pragmatic inference mechanisms may increase the learner's performance by several tens of percent, which can make a real difference.\"\n\nWe hope we have answered the reviewer's concerns and we are ready to discuss further if anything else needs to be raised.", " \nThank you very much for the extra experiments and the added clarity in the paper, it's resolved some of the issues I had from my end and improved the presentation My biggest concerns are still over the scale of the contribution that follows from the results and whether this is clear to the reader.\n\nMainly, what I'd like to see is a clearer demonstration of how the pedagogical+pragmatic teacher/learners can be expected to lead better goal finding agents generally. The authors comment: \n\n> Sample efficiency is one benefit of improving goal communication between the teacher and the learner, but we infer other benefits that \n> could arise in different setups: ... [example of picking up a hot and cold plate]\n\nWould it be possible to demonstrate such an example in the data? In general, the current set of results seems mainly to demonstrate that the pedagogical demonstrations with goal inference seems to improve sample efficiency but not necessarily to improve overall accuracy or only to do so incrementally. This is an important result but I believe if the strength of the contribution is to stand on that it should be better explained why this is an important result (e.g. due to the cost of obtaining demos in these domains) or even better to show some set of tasks or environments where this approach leads to an overall performance benefit via this type of goal inference. I think in general the results section could use improvement in this way in order to strengthen the contribution and better contextualize the work.\n\nIn summary, I'm on the fence and I do like this approach but I think the results need to better demonstrate the importance of the claims. 
\n", " We thank the reviewer for asking for additional clarifications. As we hope our example and explanation have now made clear, the teacher knows what the goal of the demonstration is, but the learner does not. For instance, does the teacher want to demonstrate how to draw a curtain, or to hide the object that is behind?\n\nThe goal being not communicated to the learner, the learner cannot fill the \"goal part\" of a goal-conditioned policy as it would do when using standard goal-conditioned learning from demonstration, and the learner needs to infer the goal to appropriately fill this goal part.\n\n**So yes, we are using GCRL, but we further assume that the teacher does not communicate the goal when performing the demonstration, which results in a more complicated setting.** We believe this setting to be highly relevant in future scenarios where human users will teach robots from demonstration with robots that cannot interpret language, as in such cases the user will not be able to directly communicate the goal while performing the demonstration.\n\nWe hope this further clarification will help the reviewer, otherwise please keep this conversation running as we believe we can reach mutual understanding before the end of the discussion period.", " Hi Authors,\n\nI appreciate many of the changes and efforts you've made in the paper to make it clear. I think the idea is now much clearer to me but I'm still not convinced by the setting where we have demonstrations but do not know the goals. \n\nFor general RL tasks, learning from demonstration without knowing the goals seems reasonable, so the IRL research is developed to infer the reward mechanism (as general 'goals'). However, in goal-conditioned R, it seems unnatural for me that we only have demonstrations but don't know the exact goals. In other words, this paper does not work in normal GCRL settings, right?\n\nHence, my core question remains: why is it a must to infer the goals in GCRL? Could the authors please explain more? \n\nI'll raise my score to an acceptance if my concern can be well-addressed.\n\n", " Thank you again for your review. We wanted to quickly follow up and see if our response adequately addresses your questions and comments, or if you have additional concerns.", " Thank you again for your review. We wanted to quickly follow up and see if our response adequately addresses your questions and comments, or if\nyou have additional concerns.", " Thanks, I understand the goal ambiguity of the DTB environment now!", " Regarding the additional questions:\n\n- If the paper is accepted, we will get an additional page, which we can use to add the OGIA results in Table 2. For now we lack space, and the program chairs explicitly told us that we cannot go over the 9 pages limit during the rebuttal.\n\n- About the DTB environment, here is a clarification: there are three ways to reach goal 1 ((orange, orange), (pink, orange) and (orange, pink)). This goal is characterized by a sound (let's say sound 1). Additionally, there is only one way to reach goal 2 ((orange, pink)) and this goal is characterized by sound 2. This means that doing (orange, pink) allows the agent to reach both goal 1 and 2 (both sound 1 and 2 will be played). 
This is the source of ambiguity which will make a naive teacher confusing for a learner, and that the pedagogical teacher will avoid (by avoiding the demonstration (orange, pink) when demonstrating goal 1 for instance).\n\nWe thank the reviewer again for their helpful review.", " Thank you for addressing all of my comments and taking the time to run additional experiments! I think I will keep my score at 8 given the criteria for 9 and 10. \n***\nAdditional Questions:\n* Would it be possible to add OGIA results to Table 2 for each ablation?\n* I'm still a bit confused by the DTB environment example in Figure 5. Goal 1 is (orange, orange) and Goal 2 is (pink, orange). How does (orange, pink) activate both Goals 1 and 2?", " Other weakness/points discussed:\n- The environment seems relatively simplistic but the agents need to learn to 1) manipulate blocks precisely (a control task), 2) put the blocks in the correct configurations (a goal-oriented task). For a Reinforcement Learning benchmark, this seems sufficiently complex to make our points about pedagogy and pragmatism. Indeed, this type of benchmark has been used extensively in the past in important papers [3,4].\n- The relative importance of having a pragmatic learner over a pedagogical teacher is only noticed in the few-demonstrations regime in Fig.3, when there are only 100 demonstrations per goal. In this specific regime, the relative importance of pragmatism over pedagogy could be due to the reward shaping scheme that allows the learner to train itself to predict the goals it reaches, and thus less focus on using demonstrations to learn the task. \n\n\n[1]: Argall, B. D., Chernova, S., Veloso, M., & Browning, B. (2009). A survey of robot learning from demonstration. Robotics and autonomous systems, 57(5), 469-483.\n\n[2]: Zhao, Xuan, and Bertram F. Malle. \"Spontaneous perspective taking toward robots: The unique impact of human-like appearance.\" Cognition 224 (2022): 105076.\n\n[3]: Andrychowicz, Marcin, et al. \"Hindsight experience replay.\" Advances in neural information processing systems 30 (2017).\n\n[4]: Nair, Ashvin, et al. \"Overcoming exploration in reinforcement learning with demonstrations.\" 2018 IEEE international conference on robotics and automation (ICRA). IEEE, 2018.\n\n[5]: Belitser, Eduard, and Subhashis Ghosal. \"Adaptive Bayesian inference on the mean of an infinite-dimensional normal distribution.\" The Annals of Statistics 31.2 (2003): 536-559.\n\n[6]: Hu, Zixi, Zhewei Yao, and Jinglai Li. \"On an adaptive preconditioned Crank–Nicolson MCMC algorithm for infinite dimensional Bayesian inference.\" Journal of Computational Physics 332 (2017): 492-503.\n", " We thank Reviewer 3RPR for their helpful review and positive assessment of our work. We first address each question, and then go over other weaknesses that the reviewer pointed out. \n\n**Question 1**: The main point of Sec. 5.3 is to show that sample efficiency is increased when learning with demonstrations and Reinforcement Learning (with SQIL) compared to Reinforcement Learning only – that is, without demonstrations. This ablation is meant to verify that using a teacher is useful. We want to know: if the learner happens to have access to a teacher, is it beneficial for the learner to use this teacher or is it more efficient for the learner to learn on its own? The experiment shows that learning from demonstration helps. This is a common result in the Learning from Demonstrations literature: demonstrations improve sample efficiency. 
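To make the DTB clarification above concrete, here is a tiny, hypothetical sketch (not taken from the paper's code) of how a pedagogical teacher could prefer unambiguous demonstrations: it ranks the outcomes that achieve the intended goal by how many other goals they also trigger.

```python
# Toy illustration of the DTB ambiguity described above (not the paper's code).
# The outcome (orange, pink) triggers both sounds, so it is ambiguous between
# goal 1 and goal 2; a pedagogical teacher avoids it when demonstrating goal 1.
GOALS_ACHIEVED = {
    ("orange", "orange"): {1},
    ("pink", "orange"): {1},
    ("orange", "pink"): {1, 2},   # ambiguous outcome
}

def pedagogical_demo(intended_goal):
    """Among outcomes that achieve the intended goal, prefer the least
    ambiguous one (fewest other goals also achieved)."""
    candidates = [(len(achieved - {intended_goal}), outcome)
                  for outcome, achieved in GOALS_ACHIEVED.items()
                  if intended_goal in achieved]
    return min(candidates)[1]

print(pedagogical_demo(1))   # ('orange', 'orange') -- avoids the ambiguous demo
print(pedagogical_demo(2))   # ('orange', 'pink')  -- the only option for goal 2
```

In the paper this preference is not hard-coded as above but emerges from the teacher's pedagogical reward for having its own goal correctly inferred from its trajectory.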
To answer the reviewer’s question regarding the proposed experiment: the reviewer suggested considering the cumulative teacher+learner budget (Budget 1) versus the budget of the learner with no demonstrations (Budget 2). In our case, Budget 1 is twice Budget 2, which would obviously make the learner trained without demonstrations more sample efficient compared to the teacher+learner combination. As far as we know, in the learning from demonstration literature [1], the computational cost of the teacher is not taken into account, which is why we did not count it. Most of the time, the teacher’s policy is even hardcoded or derived from human data. Also, a teacher may train several learners, in which case one should not count the overhead cost for each learner.\n\n**Question 2**: In our work, we do not study whether the policies obtained by the learners are ambiguous or not. Only the policies of the teachers matter: they are the ones that are used to provide demonstrations, which will be naive if there are ambiguities or pedagogical if there are no ambiguities. The story would be different if we were “closing the loop”, that is having the teacher interpreting whether the learner addresses the intended goal from its trajectories, but this richer class of problems is left for future work.\nIn a higher dimensional space, e.g. with visual input, other types of ambiguities can emerge. For instance, a “6” seen upside down is a “9” [2], which can create another type of ambiguities (here related to perspective) that have been studied in Human-Robot interaction. Another notorious source of ambiguities is language, which we discuss in our related work section. More generally, the source of possible ambiguities is large and diverse, and we do not claim that our particular implementation of BGI would directly scale to such complex problems. However, the concepts proposed in the paper (making a teacher become pedagogical via own goal inference) can be applied to such problems.\n\n**Question 3**: We thank the reviewer for noticing a limitation of our work we did not consider. The BGI computation comes with a computational cost of performing policy inference, conditioned with all possible goals (here 35). Thus, performing Bayesian Goal Inference over a possibly infinite set of goals doubtlessly raises issues. Our method is not directly applicable to such settings, and modifications are required to cover this case. There has been lots of work on Bayesian Inference in infinite dimensions (see e.g. [5,6]). Another option would be to switch from Bayesian Goal Inference to the baseline called GPNN in the paper (Section 5.4). We thus updated the paper to put forward a relative merit to this alternative in the case of infinite goal spaces. Regarding the possible experiments about adding more goals or experimenting with GPNN instead of BGI: while this is possible in principle, in practice we could not perform those experiments given the 7 days of the rebuttal. However, we agree with the reviewer that these experiments would increase the quality of the paper and we will perform them between the end of the rebuttal and the final notification. If the paper is accepted, we will add the results to the final version, provided that we get a clear message out of them.", " Minor points:\n1. We thank the reviewer for noticing a limitation of our work we did not consider. Indeed, performing Bayesian Goal Inference over an infinite set of goals doubtlessly raises issues. 
Thus, our method is not directly applicable to such settings, and modifications are required to cover this case. There has been lots of work on Bayesian Inference in infinite dimensions [4,5]. Another option would be to switch from Bayesian Goal Inference to the baseline called GPNN in the paper (Section 5.4). We thus updated the paper to put forward a relative merit to this alternative in the case of infinite goal spaces.\n2. The additional rewards provided when activating pedagogy and pragmatism are one of the core elements of our approach. Thus, the ablation study of using the additional pedagogical reward for the teacher is the naive teacher, and the ablation study of using the additional pragmatic reward for the learner is the literal learner. Our results show that they indeed provide a significant boost of performance. We also experimented with different values for these additional rewards and did not see any major change. To make that clear, we mentioned this in the updated manuscript. \n3. As the stars in Fig.3 shows, we tested for the statistical significance of our results using the software from [6], which ensures that enough random seeds were used. Nevertheless, to satisfy the reviewer's request, we ran 5 additional seeds in Fig.3 and updated the paper accordingly. The conclusions remain unchanged.\n4. We apologize for any ambiguity in our writing. Here the plural was used as a reference to all experiments that were performed in the paper. We clarified this in the paper by changing the sentence to singular. In terms of proof-reading mistakes, we ran the paper in a grammar and spelling checker software to ensure to reduce such errors as much as we can. We would be grateful to the reviewers for pointing out any remaining mistakes if they find any.\n\n[1]: Gweon, Hyowon. \"Inferential social learning: Cognitive foundations of human social learning and teaching.\" Trends in Cognitive Sciences 25.10 (2021): 896-910.\n\n[2]: Baker, Chris L., Joshua B. Tenenbaum, and Rebecca R. Saxe. \"Goal inference as inverse planning.\" Proceedings of the Annual Meeting of the Cognitive Science Society. Vol. 29. No. 29. 2007.\n\n[3]: Dik, Giel, and Henk Aarts. \"Behavioral cues to others’ motivation and goal pursuits: The perception of effort facilitates goal inference and contagion.\" Journal of Experimental Social Psychology 43.5 (2007): 727-737.\n\n[4]: Belitser, Eduard, and Subhashis Ghosal. \"Adaptive Bayesian inference on the mean of an infinite-dimensional normal distribution.\" The Annals of Statistics 31.2 (2003): 536-559.\n\n[5]: Hu, Zixi, Zhewei Yao, and Jinglai Li. \"On an adaptive preconditioned Crank–Nicolson MCMC algorithm for infinite dimensional Bayesian inference.\" Journal of Computational Physics 332 (2017): 492-503.\n\n[6]: Colas, Cédric, Olivier Sigaud, and Pierre-Yves Oudeyer. \"A hitchhiker's guide to statistical comparisons of reinforcement learning algorithms.\" arXiv preprint arXiv:1904.06979 (2019).", " We thank Reviewer 4LZc for their insightful review and comments. Below we first provide answers to major points, then to minor points.\n\nMajor points:\n\n1. We thank the reviewer for pointing that the motivation of our work was not clear enough. The motivation is as follows: In real life, when a teacher shows how to do something with a demonstration, the agent receiving the demonstration does not have access to the intended goal of the demonstration: it must infer this goal. 
In many situations, a single demonstration may be correctly interpreted as demonstration of a variety of goals – we call this goal ambiguity. In that case, the teacher may use pedagogy to help the learner infer the intended goal, and the learner may use pragmatism in order to increase its chance to infer the right goal. This is the situation we address in this paper, with an example given in the first line of the introduction. We have now added this more explicit explanation to the introduction of the revised version of the manuscript.\n2. All of our agents do use Hindsight Experience Replay in their Reinforcement Learning process. This is specified in Section 4.1. This is also the case for all of our baselines in Table 3.\n3. We agree that the citation is not correct, we modified it to point to the precise GCRL algorithm we used in our paper. We also added a mention to the name of the algorithm and where to find technical details about it in the updated version of the paper we uploaded. Moreover, we updated Section 4.1 by adding a complete description of how the GCRL algorithm works.\n4. By inferring the goal of the demonstration provided by the teacher, the learner can match the demonstration with the goal of the demonstration, such that the learner can actually learn from this demonstration. Otherwise, the learner does not know the goal of the demonstration, with the hypothesis we discussed in point 1 (the agent does not have access to the goal of the demonstration, and thus must infer it).\n5. Our experiments are indeed empirical. Given that we use neural networks to learn to master the goals, we cannot prove for instance that the pedagogical teacher necessarily reduces ambiguities due to mathematical unsolved problems on neural network convergence. However, our empirical evaluation of this phenomenon leaves no doubt about this, see Section 5.1 (Ambiguity Score results). Unfortunately, this shortcoming of not being able to provide theoretical guarantees also applies to most Deep Reinforcement Learning algorithms. ", " Other weakness/points discussed:\n\n- New goals are discovered by the learner through exploration while pursuing a goal. Indeed, the Reinforcement Learning algorithm (SAC) leverages exploration to occasionally discover new goals. These new goals can then be demonstrated by the teacher, such that the learner can master them. This is specified in Alg.1, and in Phases 1 and 2 in Section 3.2. Regarding how goals are selected, the teacher randomly selects goals to demonstrate among the goals discovered by the learner. We added the mention of randomness in goal selection in Alg.1 to clarify this point.\n- We would love to include more details about architectures, training (which are provided in full details in the appendix) in the main paper. Unfortunately we clearly lack space. However, if the paper is accepted, we will gain an additional page, which we will use to add those details by simply moving them from the appendix to the main paper. We already prepared this novel version of the paper, but we were informed by the Program Chairs that we cannot upload a 10-page version of the paper during the rebuttal.\n- Our results show increased sample efficiency when using pedagogy and pragmatism (resulting from the reduced amount of goal ambiguity in the demonstrations). 
Sample efficiency is one benefit of improving goal communication between the teacher and the learner, but we infer other benefits that could arise in different setups: if the goal is miscommunicated, there might be catastrophic consequences in real-life scenarios (grabbing a cold plate versus grabbing a very hot plate while cooking for instance). About the choice of 95% threshold, it is indeed arbitrary, we just wanted to illustrate the difference in sample efficiency, see the answer to Question 3.\n- Regarding insufficient baselines, the baseline considered in Section 5.5 are there to justify the use of our modified version of SQIL instead of another learning from demonstration method. We simply show that even with pedagogy and pragmatism, the three baselines considered (learning from demonstrations only, behavioral cloning and the original version of SQIL) do not reach a sufficient Goal Reaching Accuracy to master all goals in our teacher/learner experimental setup. Thus, it is legitimate to base our results on our modified version of SQIL, at least for this environment (Fetch Block Stacking). If we agree on the fact that pedagogical/pragmatic combination clearly improves upon naive/literal combination, then we agree on the main point of our paper. \n- Regarding experimenting with less than 100 demonstrations per goal, we agree with the reviewer’s comment, which led us to perform the experiment of Fig.3 with 10 demonstrations per goal. We obtained the following results: the hierarchy of methods is respected (pedagogical + pragmatic > naive + literal), but none of the approaches were able to master all goals. This is certainly due to the SQIL method we use, which forces a 50/50 split of demonstrations versus collected trajectories by the learner in the buffer of the Reinforcement Learning algorithm. Hence, this does not change our main claim around pedagogy and pragmatism being helpful when training with demonstrations. We added the figure in the appendix, and added a comment about those results in the main paper.\n- Results about DTS can be moved in the main paper, at least shortly, if the paper is accepted given the additional page that we will get. We already prepared this novel version of the paper, but we were informed by the Program Chairs that we cannot upload a 10-page version of the paper during the rebuttal.\n\n[1]: Argall, B. D., Chernova, S., Veloso, M., & Browning, B. (2009). A survey of robot learning from demonstration. Robotics and autonomous systems, 57(5), 469-483.\n", " We thank Reviewer yuXv for their thorough and helpful review. We first answer questions and then go over all other weaknesses raised in the review.\n\n**Question 1**: The reviewer wonders what happens if we cannot ensure that all goals are mastered/solved/achieved by the teacher over the dataset. In our work, we need to assume 100% mastery of goals from the teacher, and we train it until it reaches a Goal Reaching Accuracy close to 100%. If the teacher is not able to master a goal, then its demonstrations for this goal are not helpful, and since the learner and teacher share the same architecture and learning process for exploration, there is no hope of having the learner master that particular goal if the teacher cannot.\n\n**Question 2**: The reviewer rightfully points to two missing flags in Alg. 1. 
Indeed, the flags “learner is pragmatic\" and \"teacher is pedagogical” refer to two booleans that are true if the learner is pragmatic (respectively if the teacher is pedagogical) and false if the learner is literal (respectively if the teacher is naïve). These two boolean numbers correspond to “pragmatic_learner” and “pedagogical_teacher” variables in the codebase provided in the supplementary. We added these variables in the initialisation phase of Phases 1 and 2 in Alg.1 and clarified the sentences “learner is pragmatic” and “teacher is pedagogical” to refer to these variables. We thank the reviewer for having spotted this missing information.\n\n**Question 3**: About the dip for the learner in Fig. 4, we thank the reviewer for catching an error we made in the plot. Indeed, there was a bug in the data used for the plot in Figure 4. Only one seed was used instead to compute the mean for the yellow curve (learner trained without demonstrations), and the error bars were erroneous. We should have seen that each learning curve should be monotonically increasing. We rectified this error in the updated manuscript by plotting the correct data. The curve now makes sense, and does not dip anymore. Regarding the point of this experiment, the conclusions remain the same. The main point of Sec. 5.3 is to show that sample efficiency is increased when learning with demonstrations and Reinforcement Learning (with SQIL) compared to learning without demonstrations and Reinforcement Learning only. This ablation is meant to verify that using a teacher is useful.\n\n**Question 4**: About the computational overhead from the pre-training process, it is the same as the computational cost of training the learner without demonstrations. In a setup with a teacher teaching a learner to master goals using demonstrations, we do not consider the computational cost of training the teacher. As far as we know, in the learning from demonstration literature [1], the computational cost of the teacher is not taken into account, which is why we did not count it. Most of the time, the teacher policy is even hardcoded or derived from human data. In Section 5.3, the goal is to show that learning with demonstrations is superior to learning without demonstrations, which verifies that our learning setup indeed benefits from using demonstrations. The rest of the paper involves showing that adding pedagogy and pragmatism is beneficial too.\n", " Other weakness/points:\n\n- Results about OGIA: these results are not provided in tables or figures but rather in the text. You can find the results in Section 5.1 for the teacher (95.6% for pedagogical teachers and 83.4% for naïve teachers), and Section 5.2 for learners (average of 12.4% increase in OGIA for pragmatic learners over literal learners).\n\n- We added “in the FBS environment” in the beginning of Section 5.1, please refer to the uploaded new version of the paper.\n\n- In the DTB environment, there are three outcomes (pairs of balls) that achieve goals, out of the 2^3=8 outcomes. Indeed, the sentence is confusing, so we corrected it in the updated version of the manuscript. \n\n[1]: Argall, B. D., Chernova, S., Veloso, M., & Browning, B. (2009). A survey of robot learning from demonstration. Robotics and autonomous systems, 57(5), 469-483.\n\n[2]: Sukhbaatar, S., Lin, Z., Kostrikov, I., Synnaeve, G., Szlam, A., & Fergus, R. (2017). Intrinsic motivation and automatic curricula via asymmetric self-play. arXiv preprint arXiv:1703.05407.\n\n[3]: Cole, Michael. 
\"The zone of proximal development: where culture and cognition.\" Culture, communication, and cognition: Vygotskian perspectives 146 (1986).\n\n[4]: Akakzia, Ahmed, et al. \"Grounding language to autonomously-acquired skills via goal generation.\" arXiv preprint arXiv:2006.07185 (2020).\n\n[5]: Fournier, Pierre, et al. \"Clic: Curriculum learning and imitation for object control in non rewarding environments.\" IEEE Transactions on Cognitive and Developmental Systems 13.2 (2019): 239-248.\n\n[6]: Duan, Yan, et al. \"One-shot imitation learning.\" Advances in neural information processing systems 30 (2017).\n\n[7]: Kim, Beomjoon, et al. \"Learning from limited demonstrations.\" Advances in Neural Information Processing Systems 26 (2013).", " We thank Reviewer ozr3 for their helpful review and for their very positive appreciation of our work. We have treated the minor mistakes pointed out by the reviewer and we answer their main questions below.\n\n**Question 1**: The reviewer wonders whether having the teacher mastering all goals is mandatory in our work. In the scope of the paper, it is: we only consider the case where the teacher masters all goals before trying to teach anything to the learner, which is the classical setup in Learning from Demonstrations [1]. Having the teacher learning to reach some goals simultaneously with teaching would make our work closer to Asymmetric Self Play [2], but this would raise additional issues that we do not consider here.\n\n**Question 2**: The reviewer wants to know more about the goal selection strategy. In this work we consider a random goal selection among discovered goals, as having a curriculum over goals is not our focus. In practice, the first goal is the goal representing all blocks far apart, which any agent can achieve because it is the starting state in the environment. This goal acts as a placeholder for the goal-conditioned policy so that the agent starts from scratch. Then, the agent can actually discover new goals via exploration. If the first goal was chosen strategically (two cubes next to each other for instance), then the teacher would sample this goal more often at the beginning of training, and the learner would eventually get demonstrations for this goal early on during training. This might increase the learner’s performance. Indeed, strategically choosing which goal to follow and demonstrate first is a way to provide a curriculum of goals. Providing a curriculum of goals would impact the learner's performance, but would not impact the ambiguity of demonstrations, and is thus orthogonal to the concepts of pedagogy and pragmatism which are the core interests in our work. To conclude, introducing a curriculum of goals (which might be a careful selection of increasingly harder goals to provide a scaffolding strategy) could be an addition to our work in the future, but is not our focus here. Notably, many works related to agents learning from demonstrations provide goals which are in the “Zone of Proximal Development” (a concept derived from developmental psychology [3]), i.e. neither too easy nor too hard for the agent to learn [4,5].\n\n**Question 3**: The reviewer wants to know whether increasing the number of demonstrations would make GPNN perform better compared with BGI. The question being legitimate, we ran the experiments to provide an answer based on actual data. We trained a GPNN on 5000 demonstrations per goal, for both naive demonstrations and pedagogical demonstrations. 
We obtained the following results in terms of GIA: 75.6% for naive demonstrations (compared to 76.0% for BGI) and 87.4% for pedagogical demonstrations (compared to 84.7% for BGI). Hence, with 5000 demonstrations, GPNN scores on par with BGI for naive demonstration, whereas it marginally surpasses it for pedagogical demonstrations. The conclusion from this experiment remains untouched, BGI remains the right option when there are few demonstrations. Note that 5000 demonstrations per goal equals 175k total unique demonstrations, which is a lot considering that most Learning from Demonstrations methods try to maximally reduce the number of demonstrations used [6,7].\n", " We thank all reviewers for providing a thorough evaluation of our work. Their constructive remarks have helped us improve the quality of our work, as can be seen in the revised version that we just uploaded. Our response to each reviewer below is self-contained, but here we provide a general answer to comments that were found in multiple reviews.\n\n1) We updated the manuscript with several modifications thanks to the reviewer's comment. They all appear in red.\n\n2) Two reviewers raised a point about counting the training of the teacher as an overhead of computation cost regarding the experiments in Section 5.3 (Figure 4). As far as we know, in the learning from demonstration literature [1], the computational cost of the teacher is not taken into account, which is why we did not count it. Most of the time, the teacher policy is even hardcoded or derived from human data. In Section 5.3, the goal is to show that learning with demonstrations is superior to learning without demonstrations, which verifies that our learning setup indeed benefits from using demonstrations.\n\n3) Two reviewers also pointed out that our method might not scale to an experimental setup with a greater number of goals, and possibly an infinite amount of goals. We thank the reviewers for noticing a limitation of our work we did not consider. Indeed, performing Bayesian Goal Inference over an infinite set of goals doubtlessly raises issues. Thus, our method is not directly applicable to such settings, and modifications are required to cover this case. There has been lots of work on Bayesian Inference in infinite dimensions (see e.g. [1,2]). Another option would be to switch from Bayesian Goal Inference to the baseline called GPNN in the paper (Section 5.4). We thus updated the paper to put forward a relative merit to this alternative in the case of infinite goal spaces.\n\n4) We rectified a number of other important points in the paper, which we list here:\n\n - There was an error on Figure 4 regarding the plotted data, which caused a dip in performance of the learner without demonstrations. The data used for the plot was not correct. We corrected it, and the conclusions of the experiments remain unchanged. We thank Reviewer yuXv for noticing this error.\n - We clarified Alg.1 by providing details about the pedagogical and pragmatic boolean variables used in the algorithm, as well as the random goal selection process among discovered goals. Upon acceptance, the paper can be 1 page longer, so in order to fulfill the reviewers' requests we will update the paper with content that is currently in the appendix: 1) details about architecture and training, 2) a summary of results on the Draw Two Balls environment.\n - There was a concern on the number of seeds used in the main experiments (Section 5.1 and 5.2). 
As the stars in Fig.3 show, we tested for the statistical significance of our results using the software from [3], which ensures that enough random seeds were used. Nevertheless, to satisfy the reviewer's request, we ran 5 additional seeds in Fig.3 and updated the paper accordingly. The conclusions remain unchanged.\n - There was some concern that the GPNN baseline would significantly outperform our BGI method if we were to use more demonstrations in Section 5.4. We ran additional experiments with 5000 demonstrations per goal and showed that GPNN does not significantly outperform BGI.\n\nWe again thank the reviewers for their work. If any reviewer feels that our answers have not correctly addressed their point, we would be delighted to answer a further set of questions. If the reviewers feel that our answers are satisfactory, we would appreciate if they would consider raising their score.\n\n[1]: Belitser, Eduard, and Subhashis Ghosal. \"Adaptive Bayesian inference on the mean of an infinite-dimensional normal distribution.\" The Annals of Statistics 31.2 (2003): 536-559.\n\n[2]: Hu, Zixi, Zhewei Yao, and Jinglai Li. \"On an adaptive preconditioned Crank–Nicolson MCMC algorithm for infinite dimensional Bayesian inference.\" Journal of Computational Physics 332 (2017): 492-503.\n\n[3]: Colas, Cédric, Olivier Sigaud, and Pierre-Yves Oudeyer. \"A hitchhiker's guide to statistical comparisons of reinforcement learning algorithms.\" arXiv preprint arXiv:1904.06979 (2019)", " This paper leverages the ideas of pedagogy and pragmatism for training goal-conditioned RL agents in multi-goal environments with demonstrations. Pedagogy refers to the ability of the teacher to provide non-ambiguous demonstrations to the learner, whereas pragmatism refers to the learner’s ability to discern which goal the teacher intended to demonstrate. The algorithm consists of two phases: 1) a goal-conditioned teacher policy is trained using off-policy trajectories. In addition to the reward from the environment, the teacher policy receives a reward for correctly inferring the intended goal of its own trajectory using Bayesian Goal Inference (BGI). 2) The teacher provides demonstrations to a new learner policy, which receives rewards from the environment and additional rewards for correctly inferring the intended goal of the teacher's demonstrations, and the intended goal of its own trajectories. The experimental results show that pedagogical teachers and pragmatic leaners are useful when learning from demonstrations. ### Originality\nThe related work section is comprehensive, and I appreciate the authors breaking the different areas into different sections. This work seems to be a novel combination of pedagogy, pragmatism, BGI, and goal-conditioned RL in multi-goal environments.\n\n### Quality\nThe experimental results were presented well and clearly addressed the contributions stated in the introduction. The authors do a great job of including specific information such as architecture, number of random seeds, mean, variance, etc. \n\nThe questions that the experiments aim to answer seem to be 1) can the learner infer goals from the teacher, 2) can both the learner/teacher infer goals from their own trajectories, 3) can the learner reach all goals, and 4) can the learner both predict goals and reach them.\n\n$GIA$ addresses 1, $OGIA$ addresses 2, $GRA$ addresses 3, and $GIA \\times GRA$ addresses 4. 
The results for $GIA$, $GRA$, and $GIA \\times GRA$ are included in Table 1, but I did not see the results for $OGIA$?\n\n### Clarity & Significance\nOverall, the paper is very well written and easy to follow. The authors motivated the work in an intuitive way, and also highlighted potential applications. The results seem to be significant, especially for areas like robotics where acquiring demonstrations can be expensive. Researchers can easily extend this approach to use pre-existing policies for the teachers, then further refine them to be pedagogical. \n\n* In Section 5.1, it is implied that the experiments are referring to Fetch Block Stacking, but I think this should be stated in the first sentence for clarity. E.g., \"We verify that the pedagogical teacher can indeed better predict goals from its demonstrations compared to a naive teacher 'in the FBS environment'\".\n* I don't completely understand how the Draw Two Balls environment is defined. If there are purple, orange, and pink balls, and two consecutive draws, how are there only three possible outcomes?\n* Nits\n * I think the second line of Figure 5 should say \"goal 2 (activation of sound 2)\"\n * Line 85: remove the comma\n * Line 119: add comma \"Finally,\"\n * Is the teacher's ability to master all goals in the environment a requirement for the use of this algorithm? Were any experiments done to see if training both the student and teacher in a cooperative manner had better results (essentially combining phases 1&2 together)? \n* How is the first goal chosen in phases 1 and 2? Could strategically choosing the first goal impact the performance of the teacher and student?\n* Do the authors think that increasing the number of demonstrations would make GPNN perform better compared with BGI? The limitations were well addressed in Section 6. ", " * This work focuses on the problem of learning from teacher/learner demonstrations and in particular focus on training a teacher/learner setup where goal ambiguity can be resolved to improve the quality of the demonstrations and to thus improve effective goal inference and task performance. Multi-goal environments are utilized.\n* The authors make use of concepts from cognitive science, pedagogy: optimization of teaching concepts; Pragmatism: resolution of ambiguities of intention from the teacher. \n* With this in mind the authors aim to tackle the problem of goal ambiguity: given that at least two goals are reachable in a given trajectory can reach at least two goals -> can't reliably predict pursued goal. The authors aim to improve demonstrations by resolving goal ambiguity and learning good goal inference.\n* Teacher/Learner share common goal / state / action spaces and use separate policies. Communication between the two is facilitated via teacher demos, learner's inferred goal, teacher feedback on inference.\n\nTraining proceeds roughly as follows:\n\n1. Teacher is pre-trained in the env to master/solve all goals with Goal Conditioned RL (GCRL)\n2. Learner infers goal from teacher's demo using Bayesian goal inference (BGI), teacher provides feedback, learner rewarded for correct predictions\n\nThe teacher tries to infer its own goal (with BGI) from trajectory after reaching it. It adds a \"pedagogical\" reward to the trajectory if so which aids in retaining high performance on the task.\n\nResults measured over the Draw-Two-Balls (DTB) and Fetch-Block-Stacking (FBS) environments. Four metrics are collected:\n\n1. Goal inference accuracy (GIA) - learner can infer G from d\n2. 
Own goal inference acc - teacher can infer G from its own demonstrations\n3. Goal reaching acc (GRA) - can learner reach all goals?\n4. GIA x GRA - can the learner both infer its goal and reach it\nAmbiguity Score\n\nThe main claim of the paper is that using a pedagogical and pragmatic teacher/learner setup to improve demonstrations through goal disambiguation will lead to to faster learning with fewer demonstrations.\n **Strengths**:\n\n* The authors aim to tackle an interesting problem of leveraging teach/learner demonstrations which are important in a variety of domains. * The approach they take to disambiguate goals from existing demonstrations is very compelling and could lead to learning improvements in a variety of ways (signal boost, sample efficiency, data quality)\n* Overall the approach presented in the paper is sound and the paper is clearly written for the most part although there may have been a bit more in the way of architectural detail.\n* The authors provide useful context for their work in section 2 describing past relevant work in pedagogical demos, pragmatic inference and demonstrations in goal conditioned tasks.\n* The training phases are detailed well with accompanying algorithms.\n* The authors show that in the context of naive/simple v. pedagogical/pragmatic teacher/learner setups the pedagogical+pragmatic variant has a clear advantage in the evaluated tasks.\n\n\n**Weaknesses**:\n\n* In the methods section some more explanation on how new goals are discovered could be helpful. I'm a bit foggy on how the goals are actually selected.\n* In section 4.1: A bit more explanation around the distinction of literal v pragmatic learners and naive v pedagogic teachers would be helpful. Some basic details in the main paper would be helpful. Same goes for the architectural details of the teacher/learner networks. The training and interaction is explained however there are no clear architecture diagrams and not much explanation in the main paper.\n* Results in the main paper are limited to a single task and while they show better sample efficiency in this case it is hard to understand the benefit realized more generally. Further the 95% threshold for the 2x increase claimed seems somewhat hand picked (e.g. why not 80% or 90%? These would reduce the sample efficiency significantly). \n* From the results it seems clear that Pedagogical/Pragmatic Teacher/Learner improve over Naive/Simple variants however it is less clear how well these improve more generally over other approaches. Section 5.5 seems to address this over a limited set of baselines including Behavioural Cloning and SQIL but these don't seem to be particularly strong baselines (a couple of them don't even reach a substantial number of goals in the demos). I believe more rigorous comparisons to baselines would be needed here in order to really justify the claims of this work.\n * In figure 4 there is a notable dip over the learner without demonstrations at roughly 6e5 learner steps. It would be useful to explain what this might be, especially since it seems to be present for all seeds and only for this condition. It would be helpful to understand these gains in a broader context to understand the full benefit of the work here. 
It also may have been useful to try with fewer than 100 demonstrations as performance seems to be mostly unaffected when reducing the number of demos from 1000.\n* There are no results for DTS results in the main paper, including these could be useful.\n\nOverall I think that this is a promising are of work but some more detail would help to explain the approach and the experiments/results would benefit from being more comprehensive to satisfy the claims.\n\n\n[2022-07-27] EDIT: I've added in the missing bit the authors referenced in their comment (I can't reply Officially yet, so adding here). Apologies for that and thanks for flagging. \n* How can we ensure that all goals are mastered/solved/achieved by the teacher over the dataset? What happens if we are only able to master some fraction?\n* In alg. 1 are flags provided for \"learner is pragmatic\" and \"teacher is pedagogical\"?\n* What do you think causes the accuracy dip for the learner curve with no demonstrations in figure 1?\n* What computational overhead does the pre-training occur? How does this compare to the phase involving learning from the demonstrations?\n Limitations weren't explicitly addressed in the main paper to my knowledge.", " This submission proposes a novel method to learn from demonstration. Inspired by human learning, the authors propose a Bayesian Goal Inference framework to avoid ambiguity in multi-goal learning. And use such a framework to benefit the learner's learning under the demonstrator's help. I appreciate the novelty, and great effort put into the paper. Especially the illustration video in the supplementary material--- which helped a lot in understanding the work. \n\nHowever, there are several concerns I hope can be addressed according to the current stage of the presentation. \n1. motivation/ problem formalism: the authors failed to present the task in a clear enough way: e.g., why does the learner need to infer the goal when the goal is known?\n2. missing baselines/ lack of empirical results.\n3. lack of theoretical guarantee\n Major concerns:\n\n1. How is the research motivated, why does the pedagogical demonstration and inference of goals important? In GCRL, demonstrations should contain the true goal state, then what is the purpose of inferring multiple potential goals? Real-world applications can be helpful in demonstrating the importance of the studied problem.\n\n2. with those demonstration data, Hindsight Experience Replay methods can be applied to generate more successful goal-reaching experiences. I believe a comparison to HER can be helpful in evaluating the proposed method's performance. (e.g., in Table 3)\n\n3. As for the precise GCRL algorithm the authors used, it was mentioned in line 104 by referring to [11], however, this [11] is a survey paper. More description of the learning algorithm itself should be at least referred to in this section.\n\n4. during the learning of GCRL, the goal is known, then what is the purpose of inferring the goal?\n\n5. the proposed method does not have a theoretical guarantee but is mostly of a shape as an empirical study.\n\nminor:\n\n1. in GCRL (practically, the robotics control tasks, e.g., [36]), the state space and goal space (with a mapping between state and goal space always assumed to be known [36]) are continuous, how will the proposed method tackle such situations where the goal set has an infinite cardinality?\n\n2. Section 4.1 introduces several reward-shaping designs, is the proposed method sensitive to those choices? 
detailed ablation studies can be helpful to verify the proposed method.\n\n3. 5 seeds are not enough for evaluating RL algorithms.\n\n4. some presentations are ambiguous and need proofreading. e.g., line 100 'where learners are both...', I reckon there is only one learner?\n\nI may misunderstand some parts of the work, and I'm happy to raise my score if the above concerns can be appropriately addressed. 1. clear motivation of the research\n2. solid guarantee of the proposed method\n3. more empirical studies (#seeds and environments)", " This work proposes utilizing teacher-learner settings in Goal-conditioned Reinforcement Learning with concepts of pedagogy and pragmatics from other disciplines. In particular, the paper introduces the process of Bayesian Goal Inference (BGI), which is a reward-shaping method that encourages policies to learn behaviors that allow an observer to infer the goal given the trajectory. The paper uses BGI into two settings: to augment a teacher policy’s behaviors to have more disambiguation (“pedagogical teacher”), and to augment a learner to filter our teacher demonstrations and to predict it’s own goals (“pragmatic learner”). The paper applies this method to a toy setting and then a simulated robotic block-pushing setting, and show that pedagogy and pragmatics improve goal reaching and goal inference for both the teacher and the learner. Strengths:\n- The method is quite simple and easily usable with any GCRL + LfD method.\n- Since the method implementation in this paper did depend on specific underlying decisions (ie. teacher-learner setup, BGI, SQIL), the ablations were useful to showing the method still works in other settings\n- The method increases goal-reaching accuracy in addition to goal-inference accuracy\n\nWeaknesses:\n- The method is only tested on a fairly simple block-pushing environment. In more challenging environments (high-dimensional action and observation spaces, more ambiguity in the goal space), it is unclear how noisier BGI predictions will affect the method as it is used for reward-shaping\n- Much of the benefit of the method arises from the learner’s utilization of BGI. While 5.4 studies a teacher-less setting, it’s unclear why it’s much more important to have a pragmatic learner than a pedagogical teacher. See questions below.\n - A lot of benefit results from going from literal learner => pragmatic learner, ie. the BGI pedagogical loss for reward shaping. Perhaps the fairest comparison for experiment 4) would be to include cumulative environment interactions (including how many the teacher used). Thus, it can be possible to tell whether the driving benefit results from on-policy updates to encourage the experience-collecting policy (whether it is teacher or learner) to have more disambiguated trajectories. Can you try an experiment where you give the learner (no demonstrations) as many on-policy updates w/ BGI as the cumulative teacher+learner on-policy update budget?\n- I could not find information on how much ambiguity results from the environments, as measured by the Ambiguity Score. How percentage of trajectories in the Naive/Literal methods (without BGI) result in ambiguous policies? How does this scale to more complex tasks in higher-dimensional spaces such as learning from vision, where the BGI measure might be much noisier?\n- How expensive is the BGI prediction, and how does it scale to continuous action spaces or continuous goal spaces? 
Given that the method relies on the learner BGI heavily, how does the method fare when goal inference becomes more challenging or a worse method is used? Some possible experiments might be to use GPNN instead of BGI in the main experiments, or an ablation increasing the number of goals.\n It would be interesting to see how well the method scales to settings where ambiguity is much harder to quantify. In the paper, the authors illustrate the challenges of ambiguity with a three-block toy problem, but in the real world goal inference is often challenging even for experts. In such settings, goal inference auxiliary tasks (like BGI) may not help performance, and may even hurt occasionally. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "ke7_CDav0mk", "iHKw0OQnXw-", "wKwpAbWYvZi", "k4gkiaGMgOU", "3T_UdhN98z-", "hhmnN3lPUvB", "gMyGXvzyPsK", "FVlDTyeYNcK", "AxZmNrudgcT", "BoiFqgiU2lK", "FF36DIxsfhz", "Nx17r22J8Q", "4UoW-llrhfJ", "hhmnN3lPUvB", "kvngpOfSmG", "gMyGXvzyPsK", "PEffkr4T8AI", "Bmah8rnViHc", "nips_2022_sGugMYr3Hdy", "nips_2022_sGugMYr3Hdy", "nips_2022_sGugMYr3Hdy", "nips_2022_sGugMYr3Hdy", "nips_2022_sGugMYr3Hdy" ]
nips_2022_bntkx18xEb4
HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes
Learning to generate diverse scene-aware and goal-oriented human motions in 3D scenes remains challenging due to the mediocre characteristics of the existing datasets on Human-Scene Interaction (HSI); they only have limited scale/quality and lack semantics. To fill in the gap, we propose a large-scale and semantic-rich synthetic HSI dataset, denoted as HUMANISE, by aligning the captured human motion sequences with various 3D indoor scenes. We automatically annotate the aligned motions with language descriptions that depict the action and the individual interacting objects; e.g., sit on the armchair near the desk. HUMANISE thus enables a new generation task, language-conditioned human motion generation in 3D scenes. The proposed task is challenging as it requires joint modeling of the 3D scene, human motion, and natural language. To tackle this task, we present a novel scene-and-language conditioned generative model that can produce 3D human motions of the desirable action interacting with the specified objects. Our experiments demonstrate that our model generates diverse and semantically consistent human motions in 3D scenes.
Accept
Paper was reviewed by four reviewers and received: 1 x Borderline Accept, 1 x Borderline Reject, 1 x Weak Accept and 1 x Accept. The general sentiment of reviewers was positive. Main identified concerns were with lack of diversity in the dataset and potential realism issues arising from construction of the dataset (placing pre-recorded motions into different scenes). Some of these concerns have been somewhat alleviated through the rebuttal. That said, [bBY2] remained concerned with the realism of interactions. AC agrees with [vzPA] that collecting "natural" real-world data for this problem would be difficult and laborious and that the proposed dataset can serve as a good bridge towards this ultimate goal and could be a stepping stone for future research. Therefore the decision is to Accept the paper.
train
[ "o-VTML9ulBm", "UfG3YZ7oOU", "-Ivqe8Yp-Gy", "obIAjWek_o", "EIiFmsR4Pl1", "5LW41n0GRCI", "PeV9W5sD2tL", "2z_wQqdzgj6", "EJ1X-0V9OGE", "thbYAByoppX", "MyqoaohH2vh", "o7wgtUy-Ohm", "DrGli7HJqWO", "w58kXUdqz2t", "NjfbqFkRMft", "7oPleudlLtP", "rDyXig_RTv", "cudd8ExUfO4" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your time and constructive comments. We will integrate the feedback into the revision and further improve the quality and clarity of the paper. If we have resolved all your concerns, we kindly ask you to consider raising the rating. We believe our work would promote future research in the community!", " Thanks the authors for the responses. I have no further questions.", " Thanks for further explaining your concerns over the auxiliary tasks and how “language-conditioned motion generation in 3D scene” can be decomposed into sub-tasks. Again, we agree that this complex task can be decomposed into language grounding, action determination, and motion generation, and we are not claiming “end-to-end” is a better approach. In this paper, we mainly intend to provide a feasible baseline for the introduced task, and we hope more efforts can be devoted to exploring this task in the future.\n\nAs an initial attempt to explore the difference between an end-to-end approach and a cascaded approach for our task, we conduct experiments to compare with baselines that directly use action or target object as conditions. More specifically, We modify our CVAE model by replacing the condition with the concatenation of the global scene feature, the target object center, and the action category. We use the point cloud feature extracted from PointNet++ as the global scene feature. We adopt the one-hot embedding to represent the GT action categories. For the target object center, we use a 3D grounding model pre-trained on ScanNet, _i.e._, ScanRefer [1], to estimate the target object center. We denote this baseline as $\\mathrm{GT_{action}}$. We also test an ablative baseline that directly uses the ground truth position instead of the predicted target object position. We denote this baseline as $\\mathrm{GT_{action+target}}$. The quantitative results are as follows.\n\nModel | $\\mathrm{transl.}\\downarrow$ | $\\mathrm{orient.}\\downarrow$ | $\\mathrm{pose}\\downarrow$ | $\\mathrm{MPJPE}\\downarrow$ | $\\mathrm{MPVPE}\\downarrow$ | $\\mathrm{goal dist.}\\downarrow$ | $\\mathrm{APD}\\uparrow$\n-|-|-|-|-|-|-|-\n$\\mathrm{GT_{action}}$ | 8.76 | 5.83 | 5.44 | 209.06 | 203.33 | 1.305 | 13.39\n$\\mathrm{GT_{action+target}}$ | 8.06 | 5.85 | 5.32 | 193.89 | 188.78 | 0.246 | 8.83\nOurs | 8.61 | 6.43 | 5.71 | 210.96 | 205.42 | 1.081 | 10.54\n\nFrom the table, we can see that the baseline that directly uses GT action-conditioning reaches approximately the same performance as our model, while utilizing the GT action and the GT target position can significantly improve the motion generation metrics. We hypothesize this is because the action can be easily parsed from the instruction and 3D object grounding is significantly more challenging than other subtasks, which affects the performance most. The results also justify the decomposition could achieve similar performance in current complexity, we will clarify this finding in our revised version, thank you!\n\n[1] Dave Zhenyu Chen, Angel X Chang, and Matthias Nießner. Scanrefer: 3d object localization in rgb-d scans using natural language. In _European Conference on Computer Vision (ECCV)_, 2020.\n", " Thank you for the responses. I have read all the reviews and responses. I am satisfied with the responses of my questions. \n\nOverall, I think this is a very ambitious direction, and the authors have done a good job of providing the community with a valuable dataset and starting point to explore further. 
Hence, I am raising my score by 1", " Dear reviewer:\n\nThanks again for your constructive suggestions, which have helped us improve the quality and clarity of the paper!\n\nSince the discussion phase is about to end, we have not heard any post-rebuttal response yet.\n\nPlease don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. Thanks!", " Dear reviewer:\n\nThanks again for your constructive suggestions, which have helped us improve the quality and clarity of the paper!\n\nSince the discussion phase is about to end, we have not heard any post-rebuttal response yet.\n\nPlease don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. Thanks!", " Dear reviewer:\n\nThanks again for your constructive suggestions, which have helped us improve the quality and clarity of the paper!\n\nSince the discussion phase is about to end, we have not heard any post-rebuttal response yet.\n\nPlease don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. Thanks!", " Thanks for the detailed response! \n\nMy remaining question still centers on the auxiliary task and on a bigger scale, how general the proposed pipeline can be. I feel like the auxiliary task and the necessity of the auxiliary task prove my point that the task is degenerating action classification and goal-reaching. The action class and goal location are now explicitly involved in the loss function and guide the model to become classification and goal-reaching tasks. I feel like a key-world-based action matcher and target object extractor (based on a POS tag or something similar) might be enough to be used as tokens to be fed into the classifier. A BERT embedding does not seem necessary. This way, the task degenerates into an action-conditioned goal-reaching task, rather than \"language conditioned motion generation\". How well would a baseline that directly uses action-conditioning work? \n\nAs mentioned in the response to reviewer \"yyeu\": \"However, it does not necessarily mean that solving the three sub-tasks independently can solve the more complex task since these sub-tasks are not mutually independent and the error will accumulate across sub-tasks.\" I feel like there needs to be more discussion and ablation around this point. Training end-to-end does not always lead to better performance, and claiming \"end-to-end\" is a better approach in this problem seems overreaching. Training the system end-to-end does not automatically lead to better performance. \n\n", " [1] Mohamed Hassan, Vasileios Choutas, Dimitrios Tzionas, and Michael J Black. Resolving 3d human pose ambiguities with 3d scene constraints. In _International Conference on Computer Vision (ICCV)_, 2019. \n[2] Zhe Cao, Hang Gao, Karttikeya Mangalam, Qi-Zhi Cai, Minh Vo, and Jitendra Malik. Long-term human motion prediction with scene context. In _European Conference on Computer Vision (ECCV)_, 2020. \n[3] Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas. Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes. In _European Conference on Computer Vision (ECCV)_, 2020. 
\n[4] Mohamed Hassan, Duygu Ceylan, Ruben Villegas, Jun Saito, Jimei Yang, Yi Zhou, and Michael J Black. Stochastic scene-aware motion prediction. In _International Conference on Computer Vision (ICCV)_, 2021. \n[5] Mathis Petrovich, Michael J Black, and Gül Varol. Action-conditioned 3d human motion synthesis with transformer vae. In _International Conference on Computer Vision (ICCV)_, 2021. \n[6] Jiashun Wang, Huazhe Xu, Jingwei Xu, Sifei Liu, and Xiaolong Wang. Synthesizing long-term 3d human motion and interaction in 3d scenes. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2021. \n[7] Abhinanda R Punnakkal, Arjun Chandrasekaran, Nikos Athanasiou, Alejandra Quiros-Ramirez, and Michael J Black. Babel: Bodies, action and behavior with english labels. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2021. \n[8] Yan Zhang, Mohamed Hassan, Heiko Neumann, Michael J Black, and Siyu Tang. Generating 3d people in scenes without people. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2020b. \n[9] Mohamed Hassan, Partha Ghosh, Joachim Tesch, Dimitrios Tzionas, and Michael J Black. Populating 3d scenes by learning human-scene interaction. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2021b. \n[10] Kaifeng Zhao, Shaofei Wang, Yan Zhang, Thabo Beeler and Siyu Tang. Compositional Human-Scene Interaction Synthesis with Semantic Control. In _European Conference on Computer Vision (ECCV)_, 2022. \n[11] Naureen Mahmood, Nima Ghorbani, Nikolaus F Troje, Gerard Pons-Moll, and Michael J Black. Amass: Archive of motion capture as surface shapes. In _International Conference on Computer Vision (ICCV)_, 2019. \n[12] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al. The replica dataset: A digital replica of indoor spaces. _arXiv preprint arXiv:1906.05797_, 2019.", " Thank you very much for your valuable feedback! We sincerely hope that our response can address your concerns. \n\n### 1. Concerns about language diversity and generalization to out-of-domain language.\n\nPlease refer to the general responses.\n\n### 2. The trade-off between data curation efficiency and low-level nuances.\n\nThis is a very good observation about the ``low-level nuances'' in HSI research. The detailed hand-object interaction is not considered in our current data generation pipeline. For one thing, current HSI research emphasizes the high-level semantic and physical plausibility, also our focus in this paper. For another, how to deal with the detailed part-level interaction between humans and objects is still an open and ongoing research topic. Both Neural-State Machine [2] and SAMP [3] assume limited action types and familiar object geometry to predict the contact point. Even though low-level constraints are not extensively optimized in our pipeline, the quantitative human study (Tab. 1 in Supp. Mat.) demonstrates HUMANISE still has higher quality in terms of collision, smoothness, HSI, and overall quality compared to PROX. This shows that our dataset is a good extension of the existing HSI datasets and achieves a reasonable and practical balance between data curation efficiency and low-level nuances.\n\n### 3. Clarification of the train-test split. 
Are there semantically different sentences in the training and test set?\n\nWe split HUMANISE according to the original scene IDs in ScanNet, i.e., HSIs with scene IDs less than 600 are in the training set, and HSIs with scene IDs greater than 600 are in the test set. There is no clear split standard in terms of the language description; most sentences in the test set will have semantically similar ones in the training set. However, this task is still challenging as it requires more than language understanding, i.e., the accurate grounding in 3D unseen scenes and scene-aware plausible motion generation. This is similar to Sr3D [1], where the 3D grounding task is challenging despite the template-based simple descriptions.\n\n### 4. Discussion about decomposing the task into sub-tasks and the technical novelty of this paper.\n\nConceptually, solving the language-conditioned human motion generation task in 3D scenes can be decomposed into three sub-tasks: language grounding, action determination, and motion generation. However, it does not necessarily mean that solving the three sub-tasks independently can solve the more complex task since these sub-tasks are not mutually independent and the error will accumulate across sub-tasks. A similar case is the one-stage/two-stage visual language grounding. One-stage methods generally outperform two-stage methods, in which object detection and language grounding are performed separately. \n\nOur technical novelty mainly lies in (1) the first step to handle the proposed new task with an end-to-end baseline and (2) the incorporated auxiliary loss functions inspired by the aforementioned ideas of how to address the joint task. Experiments show that the proposed method handles the task well and the auxiliary losses indeed improve the performance.\n\n### 5. How to sample the target object based on the action label?\n\nWe predefined the potential object categories for each action type. Given an action label, the interacting object is sampled randomly from all the objects in the scene that fall into those categories. We will clarify this in our revision.\n\n### 6. The quantitative results grouping in Tab. 2.\n\nWe group the quantitative results in Tab. 2 so that the result comparison will contribute to the discussion in Sec. 5.3 in the main paper. We will clarify this in the revision.\n\n### Bibliography\n\n[1] Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas. Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes. In _European Conference on Computer Vision (ECCV)_, 2020. \n[2] Sebastian Starke, He Zhang, Taku Komura, and Jun Saito. Neural state machine for character-scene interactions. _ACM Transactions on Graphics (TOG)_, 38(6):209–1, 2019. \n[3] Mohamed Hassan, Duygu Ceylan, Ruben Villegas, Jun Saito, Jimei Yang, Yi Zhou, and Michael J Black. Stochastic scene-aware motion prediction. In _International Conference on Computer Vision (ICCV)_, 2021.", " Thank you very much for your valuable feedback! We sincerely hope that our response can address your concerns.\n\n### 1. Action category and motion lengths.\n\nPlease refer to the general responses for clarification.\n\n### 2. Clarification of CVAE-based generation framework.\n\nThe CVAE framework is widely used in conditional human motion/pose generation tasks [1,2,3,4,5]. Our experimental and ablation results demonstrate that the proposed framework, the module design, and the auxiliary loss functions are effective for this task. 
As also pointed out by reviewer 1Evw, we think our proposed framework acts as a decent baseline for future works on this task. We hope more sophisticated conditional generative models can build upon our method in the future.\n\n### 3. Model size and GFLOPs.\n\nThe total model size is about 129.3M, including the BERT module which consumes 109.5M. Due to customized modules, we cannot accurately evaluate the GFLOPs of our model. Alternatively, we report the training time, which partially reflects the computational complexity of the model. It takes about 16 hours to train our model on HUMANISE with a single V100 GPU and a batch size of 24.\n\n### 4. Are the motions in the dataset or generated by the model appropriate to the environment? Simply applying the same motion to different environments may result in unreasonable actions.\n\nOur data synthesis pipeline pinpoints two critical factors in realistic HSI: semantic consistency and physics plausibility, which are also the focuses and metrics of previous work on human motion generation [1,2,3,5]. We argue that the motions that satisfy these two factors are reasonable and appropriate interactions in 3D scenes. Therefore, human poses for `sit on couch` and `sit on toilet` could be similar. Collecting all possible interactions for certain actions, for example, all human postures of `sit on toilet`, requires significantly larger effort and is not our primary focus.\n\nFurthermore, human study (Tab. 1 in Supp. Mat.) also demonstrates HUMANISE has higher quality in terms of collision, smoothness, HSI, and is more appropriate compared to PROX.\n\n### Bibliography\n\n[1] Jiashun Wang, Huazhe Xu, Jingwei Xu, Sifei Liu, and Xiaolong Wang. Synthesizing long-term 3d human motion and interaction in 3d scenes. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2021. \n[2] Yan Zhang, Mohamed Hassan, Heiko Neumann, Michael J Black, and Siyu Tang. Generating 3d people in scenes without people. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2020. \n[3] Mathis Petrovich, Michael J Black, and Gül Varol. Action-conditioned 3d human motion synthesis with transformer vae. In _International Conference on Computer Vision (ICCV)_, 2021. \n[4] Yan Zhang, Michael J Black, and Siyu Tang. We are more than our joints: Predicting how 3d bodies move. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2021. \n[5] Mohamed Hassan, Duygu Ceylan, Ruben Villegas, Jun Saito, Jimei Yang, Yi Zhou, and Michael J Black. Stochastic scene-aware motion prediction. In _International Conference on Computer Vision (ICCV)_, 2021.", " Thank you very much for your valuable feedback! We sincerely hope that our response can address your concerns.\n\n### 1. Concerns over dataset generation and motion diversity.\n\nPlease refer to the general responses for clarification. \n\n**Detailed motion statistics:** For the motion diversity, 1268 different motions from AMASS are selected to synthesize 4892 motion sequences in 512 3D scenes. Among all the synthesized motion sequences 4892(1268), the number of the synthesized motion sequences and the number of selected motions from AMASS for each action are: 2488(715) for `walk`, 1299(355) for `sit`, 936(165) for `stand up`, and 169(33) for `lie`. Please refer to Sec. 3.3 in the main paper and Sec. A in the Supp. Mat. for more dataset statistics. 
\n\nTo measure the diversity of the generated motion, we report the Average Pairwise Distance (APD) [3,4,5], i.e., the average L2 distance between all pairs of motion samples within scenarios. APD is computed as $\\frac{1}{K(K-1)}\\sum_{i=1}^{K}\\sum_{j\\neq i}^{K}||\\mathbf{x_i} - \\mathbf{x_j}||$, where $K$ is sample size and $\\mathbf{x_i}$ is the sampled marker-based representation[5] of pose sequence. Here we choose $K=20$; the unit is meter.\n\n|Model|sit|stand up|lie down|walk|w/o self-att.|PointNet Enc.|all actions|w/o aux. loss|\n|:-|:-|:-|:-|:-|:-|:-|:-|:-|\n|APD|9.92|7.49|7.71|6.76|10.36|13.16|10.54|10.17|\n\nFrom the table, we can see the generated motions are quite diverse. It also can be seen in Fig. 4 in the main paper and Fig. 5 in the Supp. Mat. that our model is able to generate multiple meaningful motions for each scenario.\n\n### 2. Whether the auxiliary tasks degenerate the proposed task into an action classification and goal-reaching task.\n\nThe auxiliary tasks we propose in the framework act as the inductive bias to help the model better understand the language and perform 3D grounding, as can be seen from the ablation study. Since our method is an end-to-end model without stages, the task does not degenerate into sub-tasks. In contrast, prior work like [1,3] actually decouples the task into sub-tasks of action generation and goal-reaching.\n\n### 3. More qualitative results to evaluate the quality of the dataset and the generative methods.\n\nThe Supp. Mat. and the demo video provides some data samples and qualitative results. We also provide additional examples in video form from the [anonymous website](https://dsdbhj.github.io/humanise_material/index.html#dataset). We will further design an explorer upon the release of the dataset.\n\n### 4. Quantitative comparison with PROX is not entirely fair.\n\nAlthough PROX was initially proposed for pose estimation, it is now the most popular dataset for HSI research and motion generation. The gap between the estimated human poses in PROX and the MoCap sequences is exactly the motivation of our work, i.e., proposing a large-scale, high-quality synthetic dataset to facilitate the research on scene-conditioned human motion generation. The comparison between our dataset and PROX justifies our motivation.\n\n### 5. Implausible motion in the demo video at timestamp 6:48.\n\nThis motion is a failure case generated by the lie-down action model. The worse results in the `lie down` action might be due to the unbalanced data distribution (the lie-down subset is only $10\\%$ of the sit subset).\n\n### Bibliography\n\n[1] Mohamed Hassan, Duygu Ceylan, Ruben Villegas, Jun Saito, Jimei Yang, Yi Zhou, and Michael J Black. Stochastic scene-aware motion prediction. In _International Conference on Computer Vision (ICCV)_, 2021. \n[2] Mohamed Hassan, Vasileios Choutas, Dimitrios Tzionas, and Michael J Black. Resolving 3d human pose ambiguities with 3d scene constraints. In _International Conference on Computer Vision (ICCV)_, 2019. \n[3] Kaifeng Zhao, Shaofei Wang, Yan Zhang, Thabo Beeler and Siyu Tang. Compositional Human-Scene Interaction Synthesis with Semantic Control. In _European Conference on Computer Vision (ECCV)_, 2022. \n[4] Ye Yuan and Kris Kitani. Dlow: Diversifying latent flows for diverse human motion prediction. In _European Conference on Computer Vision (ECCV)_, 2020. \n[5] Yan Zhang, Michael J Black, and Siyu Tang. We are more than our joints: Predicting how 3d bodies move. 
In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2021.", " Thank you very much for your valuable feedback! We sincerely hope that our response can address your concerns.\n\n### 1. Concerns over dataset generation and diversity.\n\nPlease refer to the general responses about the concern of the action category. Here we provide additional quantitative analysis and comparison of HUMANISE as an extension of Tab. 1 in the main paper. 1268 different motions from AMASS are selected to synthesize 4892 motion sequences in 512 3D scenes. From the per-frame interaction annotation on PROX from [4], $80.0\\%$ of the frames belong to the `walk`, `stand`, `sit`, `lie`, where `lie` takes about $6\\%$. These results verify that HUMANISE is similar to the existing HSI dataset in terms of motion diversity but further possesses a significant advantage in scene variety.\n\n### 2. Difficulty or quality of the dataset. Generalization to novel descriptions and scenes for modeled trained on HUMANISE.\n\nPlease refer to the general responses to the questions regarding language descriptions and motion lengths. Here we further emphasize the challenge and difficulty of the task, and also the generalization ability of models fostered by our data. As also pointed out by other reviewers (1Evw, yyeu), the challenges of language-conditioned human motion generation in 3D scenes lie in the combination of motion realism, physical plausibility, and multi-modal 3D grounding. We argue our proposed task is much more complicated than previous works [1,2,3], which have already shown to be challenging as individual tasks.\n\nFor the generalization capability, our proposed dataset provides the chance to evaluate many more unseen scenes (85 during test time) compared to [2,4] (4 unseen scenes in PROX). Experimental results show that our model trained on HUMANISE can generalize to unseen 3D scenes. Our model can also generalize to motion generation with different durations, as seen from the qualitative results shown in Fig. 6 in Supp. Mat. As mentioned in the general responses, we also test our model's generalization ability on natural descriptions from human annotators in (1) the test split of HUMANISE, and (2) the new dataset Replica [5]. As shown in the [anonymous website](https://dsdbhj.github.io/humanise_material/index.html#generalize), our model can generate meaningful motions under human-annotated descriptions, even in new scenes from other datasets.\n\n### 3. Viability of generating rich synthetic HSI datasets by the proposed pipeline.\n\nAs discussed in the general responses, capturing motions in real scenes is indeed a challenging and labor-consuming task. Thus, producing data with a synthetic pipeline is a viable way to address the limitations of the existing datasets (i.e., quality, scale, scene diversity) and facilitate HSI research. Our pipeline pinpoints two critical factors in realistic HSI: semantic consistency and physics plausibility. Human study (Tab. 1 in Supp. Mat.) demonstrates HUMANISE has higher quality in terms of collision, smoothness, HSI, and overall quality compared to PROX.\n\n### Bibliography\n\n[1] Zhe Cao, Hang Gao, Karttikeya Mangalam, Qi-Zhi Cai, Minh Vo, and Jitendra Malik. Long-term human motion prediction with scene context. In _European Conference on Computer Vision (ECCV)_, 2020. \n[2] Jiashun Wang, Huazhe Xu, Jingwei Xu, Sifei Liu, and Xiaolong Wang. Synthesizing long-term 3d human motion and interaction in 3d scenes. 
In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2021. \n[3] Mohamed Hassan, Vasileios Choutas, Dimitrios Tzionas, and Michael J Black. Resolving 3d human pose ambiguities with 3d scene constraints. In _International Conference on Computer Vision (ICCV)_, 2019. \n[4] Kaifeng Zhao, Shaofei Wang, Yan Zhang, Thabo Beeler and Siyu Tang. Compositional Human-Scene Interaction Synthesis with Semantic Control. In _European Conference on Computer Vision (ECCV)_, 2022. \n[5] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al. The replica dataset: A digital replica of indoor spaces. _arXiv preprint arXiv:1906.05797_, 2019.", " We thank all the reviewers for their time and valuable comments. Below, we first clarify common concerns and summarize revisions.\n\n## 1. Clarification of action categories\n\n### Design principle\n\nHUMANISE aims to address the limitations of the previous HSI datasets regarding motion quality, scene variety, and semantics. As action diversity is not the topmost focus, we choose the action types by following common practices from previous works [4,6,8,9,10] on human motion generation in 3D scenes, which only deal with `walk`, `sit`, `stand up`, and `lie`. Note that the introduced four actions are also the most common daily indoor activities, dominant in previous HSI datasets, such as PROX [1]. The action type distribution is also long-tailed in AMASS [11] and BABEL [7], in which the open and close actions mentioned by reviewers only have 19 and 6 segments, respectively.\n\n### Generate other action types\n\nWe have already demonstrated that our dataset generation pipeline could be extended to other actions. In fact, we provided examples of two additional actions, `turn` and `jump`, in Fig.3 in the Supp. Mat **in the initial submission**. Here, we further demonstrate the pipeline's capability with both interactive actions (such as `open`, `place`, `knock`) and non-interactive actions (such as `dance`). Please visit the [anonymous website](https://dsdbhj.github.io/humanise_material/index.html#dataset) for visualizations. \n\n### More fine-grained interactions\n\nGiven existing methods, generating high-quality and large-scale fine-grained HSIs such as hand-object interactions in dynamic 3D scenes is much more challenging. How to collect such a dataset is still an open problem. It could be an interesting direction for future work.\n\n## 2. Clarification of motion duration\n\n### Design principle\n\nWe follow previous works to generate human motion ranging from 30 to 120 frames (30 FPS). For example, the most related work [6] generates human motions that last 2 seconds in 3D scenes, ACTOR [5] generates human motions that are less than 120 frames, and [2] considers predicting the human motion in the future 3 seconds with given histories.\n\n### Synthesize longer motions\n\nOn the one hand, our dataset generation pipeline can synthesize longer motions. We provide examples on the [anonymous website](https://dsdbhj.github.io/humanise_material/index.html#longer), including longer `walk` and `dance` that last more than 8 seconds. On the other hand, longer human motions can be seen as the composition of atomic action sequences. With our work, long-duration human motion generation and composition are promising in the future.\n\n## 3. 
Clarification of language descriptions\n\nThe language descriptions we provide in HUMANISE act as the first step to providing sufficient information about the action and scene semantics for goal-oriented human motion generation. Although the descriptions are generated using templates, they are diverse in terms of the referential utterances [3] and the involved spatial relations; see main paper Sec. 3.2. Note that the 3D object grounding accuracy on the Sr3D dataset [3] is less than $40\\%$, meaning that such a templated description is still challenging for models to locate the target objects and generate motions. We further showcase the generalization ability of our model on natural descriptions from human annotators in (1) the test split of HUMANISE, and (2) the new dataset Replica [12]. Some sample results are shown in the [anonymous website](https://dsdbhj.github.io/humanise_material/index.html#generalize).\nIn our future work, we will further explore the generalization ability to use natural language description and other 3D scene datasets.\n\n## 4. Steps toward natural Human-Scene Interaction (HSI) understanding and generation\n\nTo solve the challenges in HSI understanding and generate motions precisely with instructions, ideally, we hope to collect a large-scale dataset where long and diverse motions interact with dynamic 3D scenes and are annotated with natural language descriptions. However, this is still challenging with current 3D capturing techniques and high costs. Our work sufficiently bridges existing and future works by proposing HUMANISE with large-scale language-conditioned motions and enabling a new language-conditioned generation task. We will release our dataset, extendable dataset generation pipeline, and models to facilitate the HSI research.\n\n## Updates to the initial submission\n\nWe have revised the typos in the main paper and the Supp. Mat. as the reviewers suggest, shown in red. \n\n**Main paper:**\n\n- Fixing typos in Table 1, L2, L87, L114, \n- L124: Adding how to sample the interacting object by the action label\n- L248: Adding the scene number in the train/test split.\n\n**Supp. Mat:**\n\n- L16-L20: Adding motion diversity statistics when generating HUMANISE.", " The paper focuses on human-scene interaction, and the authors propose a new dataset and a new task of language + scene conditioned synthesis of 3D human motion. Given a 3D scene as RGB point clouds and a language description (“sit on the couch”), the goal is to synthesize realistic and plausible motion in the given scene. \n\nThey introduce the HUMANISE dataset, which contains (4892) AMASS 3D motion sequences aligned with (512) ScanNet indoor scenes. The authors do motion alignment of AMASS sequences with the scenes using collision and contact constraints. The language descriptions for these aligned motion sequences are generated using templates. They propose a reasonable encoder-decoder model to conditionally synthesize human motion sequences and provide clear ablations to back their design. Strengths:\n- The problem of 3D motion generation conditioned on language and scene is an interesting and important one. HUMANISE is the first dataset and approach to tackling this. \n- The qualitative and quantitive results of the proposed model are good. \n- The paper is also clearly written. All the arguments and technical decisions made in the paper are well-backed by reasoning or experiments. \n\nWeakness:\n\nMy primary concern is with the dataset itself and the methodology of creating the dataset. 
\n\nFirst, most sequences are very short (averaging < 2 sec at 30fps). There are only four action types considered (walk/stand/sit/lie), with “lie” being < 4% of the data. The other action types don’t have rich interactions with the environment (“walk to the table”, “sit on the couch”). Similarly, the language description is not extremely interesting as it mostly composes an action type + object type. The power of language would be in expressing more interesting compositions and interactions. \n\nGiven this, I feel the representation in Table 1 is not giving a complete picture. Yes, HUMANISE has many more scenes and clips, but the variation in diversity should also be stated. \n\nFundamentally these limitations might be due to trying to align a motion to a 3D scene it doesn’t belong. This greatly decreases the possible motions that could be aligned and the possible object and scene interactions that the motion + language can describe (other than walk/sit/stand/jump etc.) Given the dataset, I’m curious to hear the authors' thoughts on how they expect the community to build on it. \n1) Difficulty or quality of the dataset: It seems like the dataset is not particularly challenging regarding both language description and length of motions. Given this, the models trained on HUMANISE may never generalize to novel descriptions & scenes. \n2) How the dataset generation process is a viable solution to generating rich HSI datasets with language descriptions? This seems to be one possible approach, but I’m not sure it’s a promising one. Finding a motion sequence recorded in isolation to have rich interactions in an unknown scene seems unlikely. To solve the task in the right way, one might actually need to capture motion occurring in scenes (which is a very challenging task in itself). \n\nMinor fixes:\n- Table 1 caption: HUAMNISE → HUMANISE\n- Line 114: (i.e., ScanNet [Dai et al., 2017] → (i.e., ScanNet [Dai et al., 2017])\n Authors have discussed limitations and societal impact of their work.\n", " This paper released a synthesized dataset for language-conditioned human motion generation in 3D scenes, which aligns the existing 3D human motion datasets in 3D scene datasets to synthesize human motions in 3d scene. Besides, this paper proposes a baseline model based on conditional VAE for the language-conditioned human motion generation in 3D scene. This topic is promising and meaningful. Strengths:\nA large-scale and semantic-rich synthetic HSI dataset is proposed in this work, which enables a new task: language-conditioned human motion generation in 3D scene.\n\nWeaknesses:\n1) Though this is the large-scale and semantic-rich synthetic HSI dataset, the number of the action categories is limitted. There are only four interacitve indoor actions, i.e sit, stand up, lie down, and walk. The rich human-object interactions like opening a refrigerator, close the door are not included;\n2) The sequence length of each sample in the proposed dataset seems too short to express rich semantic information;\n3) As the motion dataset are aligned, not actually acted, in the 3D scene, some motion in the interaction cannot well presented in the dataset, for example torch a chair before siting on it;\n 1) Can you report some experimental comparisons of your model with other conditional generation models? Why do you use the CVAE model, and what are the effects of other conditional generation models? 
It is insufficient to judge the advanced property of the CVAE model only by ablation studies.\n2) How about the Params and the GFLOPs of your model?\n3) Do the dataset and the model proposed in this paper adequately consider the body-scene interactions? In other words, are the motions in the dataset or generated by model appropriate to the environment? For example, human sitting on the couch and sitting on the toilet may have different postures. Simply applying the same motion to different environments may result in unreasonable actions.\n Authors are advised to provide more and longer-sequence video demonstrations for the proposed dataset.\nThe body-scene interactions should be considered and refined in the proposed dataset for plausible human motion generation. \n", " This work proposes a new dataset containing paired human motion, scene semantic labels, and motion language annotation. Its main contribution lies in an automatic pipeline for generating human-scene interactions that are both semantically and language labeled. Leveraging a large-scale motion dataset, semantically labeled 3D scenes, rule-based instruction generation, and collision/contact constraints, this dataset contains better quality motion and human-scene interactions than previous video-based datasets (e.g. PROX). Leveraging this dataset, a generative model is proposed to tackle the new task, language and scene-conditioned 3D human motion generation. ## Strength\n\n**Automatic data generation**\n\n- The main strength of this work lies in the proposed dataset and automatic data generation pipeline. The idea of using existing MoCap data from AMASS and scene scans to create paired human scene interaction is intuitive and interesting. Such a pipeline does not rely on any additional data capture equipment and can help reuse existing datasets for HSI-related tasks. The proposed contract and collision constraints can largely alleviate the violation of physical constraints in this automatic data generation framework.\n\n**New Task: scene and language conditioned motion generation** \n\n- While language/action-conditioned human motion generation [1,2], scene-conditioned motion generation [3], and goal-conditioned motion generation [4] has been separately studied to a certain extent, language ***and*** scene-conditioned human motion generation has not been explored as much due to the lack of annotated dataset. All the challenges that separately exist in the above tasks will be more apparent in these combined tasks (motion realism, physical validity, multi-modal nature of human motion). This paper proposed a general baseline for further studying this interesting and impactful task.\n- The proposed VAE network, though similar to prior arts in conditioned human motion generation [1,2] does serve as a decent baseline for future works in this task. The auxiliary task loss is a viable approach to gain better performance.\n\n## Weakness:\n\n**Dataset Creation Methodology:**\n\n- While the created dataset is certainly useful for future HSI research, the limited action types and motion diversity (see in questions) takes away some of the novelty. The proposed framework can be viewed as fitting a scene for existing motion sequences, and largely relies on the action labels for matching the motion sequences. 
It is more interesting to see how diverse the **generated** motion from learning-based methods, in the hope that learning-based methods and learn from a few interaction samples and actually learn affordance.\n\n**Auxiliary task loss:**\n\n- While the auxiliary task is effective in boosting motion reconstruction performance, it takes away the language conditioning and more or less makes the task degenerates into an action classification and goal-reaching task.\n\n**Qualitative Results:**\n\n- More qualitative evaluation is needed to fully evaluate the quality of the dataset and the generative methods. Since motion is best seen in videos, it is important to include more quantitative results for both the dataset and the generative method for better evaluation. It is easy to pick sequences that fit the current text description, but the overall dataset quality is hard to judge based on a handful of samples.\n\n**Motion Diversity:**\n\n- For a motion generation task, it is important to include evaluation metrics on motion diversity. While the included metrics such as goal distance and action score are important for scene and language conditioning, given the existence of the auxiliary task loss, the proposed method can easily memorize a few motions that match goal and actions.\n\n## Small issues:\n\n- L2: mediocore “characteristics”\n- L87: study HSI related topics\n\n[1] Petrovich, Mathis, Michael J. Black and Gül Varol. “Action-Conditioned 3D Human Motion Synthesis with Transformer VAE.” *2021 IEEE/CVF International Conference on Computer Vision (ICCV)* (2021): 10965-10975.\n\n[2] Ahuja, Chaitanya and Louis-Philippe Morency. “Language2Pose: Natural Language Grounded Pose Forecasting.” *2019 International Conference on 3D Vision (3DV)* (2019): 719-728.\n\n[3] Cao, Zhe, Hang Gao, Karttikeya Mangalam, Qi-Zhi Cai, Minh Vo and Jitendra Malik. “Long-term Human Motion Prediction with Scene Context.” *ArXiv* abs/2007.03672 (2020): n. pag.\n\n[4]Hassan, Mohamed et al. “Stochastic Scene-Aware Motion Prediction.” *2021 IEEE/CVF International Conference on Computer Vision (ICCV)* (2021): 11354-11364. **Dataset Generation and motion diversity** \n\n- What is the motion diversity and number of samples for each action? BABEL provides action labels for the AMASS dataset, but for actions such as walking and sitting up and down, there are more samples than lying on beds. For the 4k motion sequences, are they all motion sequences of different action/motion? What is the data distribution among the actions?\n\n**Qualitative Results**\n\n- In the provided video, at timestamp 6:48, the right bottom motion does not correspond to any meaningful human motion.\n- Quantitative comparison with PROX is not entirely fair and does not really provide additional insight. The motion sequences from PROX are recovered from optimization-based methods and is not for the purpose of human motion generation (more for pose estimation). It is bound to have lesser quality than MoCap sequences from AMASS dataset. Yes, the authors have addressed the limitations adequately. ", " The key motivation behind this work is animating human body motion in affordance with the environment ( in this case, 3d indoor scenes), where these animations are grounded semantically in natural language. A precursor to this challenging task is the curation of a large-scale dataset with aligned 3d scenes, human motion and language. 
The authors have made contributions in the curation of such a dataset named HUMANISE, and an approach to generated diverse and semantically consistent human motions given the 3d scene and language. ### Strengths\n- Generating animation in affordance with environments is a useful and challenging problem. How a person behaves in the environment can depend completely on (1) high level conditions such as how the objects are arranged in the environment and also on (2) low-level conditions such as the subtle differences between the size of the chair, or the hand placement depending on where a person sits on a big couch. Addition of language to this challenging problem is a welcome addition and simplicity of the data curation is commendable. I hope that the data collection scripts would be released for the community to use.\n- Animation generation in context of scenes and language is complex. To that end, both quantitative and qualitative experiments were performed which make the analysis more solid.\n\n### Weaknesses\n- The descriptions are generated using a fixed template and four actions. That can be quite limiting to the diversity of the possible sentences, especially in out-of-domain language contexts. How does the model behave in those scenarios\n* Consider the example about \"sit on the armchair near the desk\" example used in the Introduction. While the semantics of finding the right chair and sitting down on it are the high level conditions been looked after, low level nuances (discussed in [1]) are ignored here such as the positions of the hands. Are they on the armrests or are they on the table or something else? \n* This brings me to a concern about the construction of the dataset. As existing animations are aligned with 3d scenes, low-level nuances will likely be incorrect. While I understand that this is a trade-off of simple data curation vs low-level accuracy, how much do we lose when we don't care about these low-level nuances?\n* How is the training and test split done? For example, consider \"sit on the coffee table\" in the train set. Is there a semantically similar sentence in the test set such as \"sit on the coffee table near the couch\"? If so, I would be curious to see some analysis of cases where this is not true because they can be a plausible edge-case (or out-of-domain scenario). \n* I would like to put forth one argument (not necessarily a weakness) on which I hope to have a constructive discussion with the authors. Let's assume we only care about the high-level semantics. This reduces the problem of first finding the target object (from the 3d scene and language), followed by figuring out the correct action to perform (from the language) and finally use these two pieces of information to generate a plausible animation (as discussed in the model section of the paper). Finding the target object using natural language and 3d scenes is a relatively well studied problem. Determining the action from the language is relatively straightforward given that the language sentences are template based and there are only 4 actions. And finally target based animation is also a well-studied problem in the graphics community. Where does the technical novelty of this paper lie? Are the authors positioning the paper that combines these three ideas to solve a more complex problem, or is there something I am missing out?\n\n#### Minor Suggestions\n- In figure 4 column 2, where is the TV. 
Some of the images are not very clear to make out the details.\n- Table 2: the \\hline is probably intended between walk and w/o self-attn.\n\n[1] - Starke, Sebastian, et al. \"Neural state machine for character-scene interactions.\" _ACM Trans. Graph._ 38.6 (2019): 209-1. - L123-124: How are the interesting target objects sampled based on the action label?\n- Rest of the questions are in the previous section The limitations have been discussed at length in the paper, barring one on trade-off between low-level nuances and data curation simplicity (more in the weaknesses section). " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "UfG3YZ7oOU", "7oPleudlLtP", "2z_wQqdzgj6", "thbYAByoppX", "cudd8ExUfO4", "7oPleudlLtP", "NjfbqFkRMft", "o7wgtUy-Ohm", "w58kXUdqz2t", "cudd8ExUfO4", "7oPleudlLtP", "rDyXig_RTv", "NjfbqFkRMft", "nips_2022_bntkx18xEb4", "nips_2022_bntkx18xEb4", "nips_2022_bntkx18xEb4", "nips_2022_bntkx18xEb4", "nips_2022_bntkx18xEb4" ]
nips_2022_2-REuflJDT
Fully Convolutional One-Stage 3D Object Detection on LiDAR Range Images
We present a simple yet effective fully convolutional one-stage 3D object detector for LiDAR point clouds of autonomous driving scenes, termed FCOS-LiDAR. Unlike the dominant methods that use the bird-eye view (BEV), our proposed detector detects objects from the range view (RV, a.k.a. range image) of the LiDAR points. Due to the range view's compactness and compatibility with the LiDAR sensors' sampling process on self-driving cars, the range view-based object detector can be realized by solely exploiting the vanilla 2D convolutions, departing from the BEV-based methods which often involve complicated voxelization operations and sparse convolutions. For the first time, we show that an RV-based 3D detector with standard 2D convolutions alone can achieve comparable performance to state-of-the-art BEV-based detectors while being significantly faster and simpler. More importantly, almost all previous range view-based detectors only focus on single-frame point clouds since it is challenging to fuse multi-frame point clouds into a single range view. In this work, we tackle this challenging issue with a novel range view projection mechanism, and for the first time demonstrate the benefits of fusing multi-frame point clouds for a range-view based detector. Extensive experiments on nuScenes show the superiority of our proposed method and we believe that our work can be strong evidence that an RV-based 3D detector can compare favourably with the current mainstream BEV-based detectors. Code will be made publicly available.
Accept
After the rebuttal and discussion, two reviewers recommend acceptance and one recommends rejection. In their rebuttal, the authors were able to convincingly resolve all issues raised. Thus the AC sees no reason to reject this paper.
train
[ "Uf88lsECFNI", "0ehnKlMjd9i", "kVGGwqKBjBy", "ThQDn9yjW-S", "ucF6AnP5tHS", "zA7rjjczM2wc", "R3ITWvldA-8", "jlTCUwQs5o7", "n5MpGK03q03", "0zzFP6kmU3A", "hlbVeC4nYsv", "KWtzlXKRzzM" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your prompt response!! I fully understand! Good luck!", " Thank you very much for your feedback. We will add the current discussion to our manuscript.\n\nIn fact, the convolution with stride 2 can also keep the one-to-one mapping by assuming that each feature location is mapped to the center of the convolution's receptive field. For example, let us assume the kernel size is $3\\times3$ and padding is $1$. The top-left location $(0, 0)$ on the feature maps is mapped to the top-left location of the input, i.e., $(0, 0)$; and the second point of the first row, i.e., $(0, 1)$, is mapped to $(0, 2)$ on the input. If you have more questions, feel free to post them here. Thank you very much!\n", " Thank you for your detailed response and their decision to release the code. Despite the concerns below, I will keep the current score. \n\nAssuming we get the downsampled feature map through max pooling, the one-to-one mapping between points and features is guaranteed as the author mentioned. However, if average pooling or convolution with stride 2 is used, I still think that multiple points can be mapped. Can you provide a further explanation?\n\np.s. Personally, I would like the current discussion and experiment related to the target assignment to be added to the manuscript or supplementary.", " If you have more questions, please let us know. Thank you for your time!", " Please let us know if you have more questions. Thank you.", " If you have further questions, please let us know.", " **Q1. How to handle the pixels each having multiple points?**\n\nThank you for your question. You have a precise understanding of our work!\n\nYes, a single pixel on the range image can have multiple 3D points due to the multi-round projection, which might belong to different objects. In this work, we only use the points in the *first-round projection* to compute the target assignments. If a pixel is assigned to multiple ground-truth boxes (this case is very rare in 3D object detection), the one with the minimum range view projection area is chosen. We did attempt to use the points from all rounds. However, both achieved a similar performance, as shown in the following table. The similar performance might be due to the fact that using points in the first-round projection is enough to compute target assignments and recall the target objects.\n\n| | mAP(%) | NDS(%) | Car | Truck | Bus | Trailer | C.V. | Ped. | Motor | Bicycle | T.C. | Barrier |\n|------------------|----------------|----------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|\n| all rounds | 58.18 | 64.27 | 83.4 | 53.4 | 67.2 | 33.3 | 19.6 | 83.7 | 60.6 | **37.8** | **73.0** | **69.7** |\n| first round only | **58.49** | **64.64** | **83.5** | **54.6** | **67.4** | **34.9** | **19.7** | **84.2** | **62.2** | 36.3 | 72.4 | **69.7** |\n\n**Q2. How to find the points corresponding to a given range view pixel?**\n\nAlthough each location $(x, y)$ on the FPN feature maps with downsampling ratio $s > 1$ can be mapped to multiple pixels in the range image with the original size, we prescribe it is mapped to one exact pixel $(xs, ys)$ on the range image. One pixel on the original range image corresponds to one 3D point (only the points in the first-round projection are considered). This 3D point is used to compute the targets assignments. 
Thus, when we deal with training targets, there is not the ambiguity that one pixel simultaneously corresponds to multiple points with different depths. Then, we obtain the corresponding 3D point at this location and check whether the 3D point is inside any ground-truth box. If any, the ground-truth box is assigned as the target of this pixel. Otherwise, the pixel is background. If a point falls in multiple ground-truth boxes, as mentioned before, the one with the minimum area is chosen.\n\n**Q2. Open-source the code and future works.**\n\nSure, we will definitely open-source our full code for active research on range views. We hope more researchers can pay attention to this direction. We will discuss the future works in the revision as well.\n\nThank you for your time in reviewing our work! If you have more questions, please let us know.\n", " **Q1. The ablation study of Table 3.**\n\nSorry, we really appreciate it if you could make your questions clearer. We clarify that all the items in Table 3 are with the multi-frame fusion, and the difference is the way to group the channels. \"Multi-frame\" means that the channels from multiple range view projections but with the same type are grouped together. For example, $x_0$, $x_1$, and $x_2$ are put in one group because they are all the x-coordinate of a point, where $0$, $1$, or $2$ is the index of the range view projection.\n\nIn addition, as noted in L287, all the inference time is measured with batch size 1 on a 3090Ti GPU. We do not use a sliding temporal window, and we only simply forward the range view images through the network. Additionally, the time differences in Table 3 (except the 1st row) are very small ($\\leq$1ms), which might be dominated by the underlying implementations instead of the network designs. Thus, we hope you can pay more attention to the model's performance here, and what we choose is the model with the best mAP and NDS.\n\n**Q2. Comparisons with the nuScenes leaderboard results.**\n\nThank you for your suggestions. We will add the leaderboard results to our paper.\n\nWe would like to note that, when comparing our model with the baseline model, we strictly controlled the settings and make them consistent. The performance of the baseline used by us is actually much better than what others reported. We use the CenterPoint implementation from the official MMDetection3D repository (https://github.com/open-mmlab/mmdetection3d/tree/master/configs/centerpoint), whose performance is similar to the one in the original paper. We further align the training settings with ours, which yields better performance (improved from mAP 57.63% to 60.40% on the val set) as shown in Table 6.\n\nIn addition, we want to note that the methods on the leaderboard often make use of model ensembles, test-time augmentation and other tricks (e.g., PointPainting) to attain good performance. Our method has no bells and whistles and can be easily deployed in practice. Also, the results on the leaderboard are not peer-reviewed.\n\nHope our responses can address your concerns. Thank you very much!\n", " **Q1: Direct comparisons with other range-based 3D detectors.**\n\nNone of these RV-based methods releases their full code. This makes it very challenging to compare ours and theirs due to the huge differences in training/testing settings, data augmentation, network architectures, pre-/post-processing etc. In addition, these previous range view detectors are NOT really fully convolutional, being much more cumbersome than ours. 
For example, both RCD and RSN are two-stage methods. RCD uses a second stage like RCNN to refine the initial box proposals, and RSN only uses the range view for points filtering in the first stage and still relies on BEV in its second stage. RangeDet exploits the Meta-Kernel technique, which dynamically generates the weights of convolutional kernels, hampering the on-device deployment (e.g., TensorRT/ONNX deployment). In contrast, our model is one-stage and only depends on the standard convolutions, thus being significantly simple and easy to deploy on self-driving cars in practice. Moreover, none of the existing range view methods can cope with the multi-frame fusion, which is one of our important contributions and was previously considered intractable in the range view.\n\nWe mainly compare our methods with mainstream BEV-based methods such as CenterPoint and PointPillar, showing that an RV-based method compares favourably with these popular BEV-based solutions even in the multi-frame case. These BEV works are used extensively in practice, and thus we think the performance competitive with theirs can demonstrate the effectiveness of our method. Finally, we will release our full code to facilitate the research of RV-based detectors.\n\n**Q2: Different performance compared with CenterPoint on the nuScenes test set and the val set.**\n\nThis is because the model size and training setting are different on the test set and the val set. As noted in L337-L339, we only use FCOS-LiDAR(c128) on the test set. The model on the val set is smaller and has only $64$ channels in its detection head. Moreover, for the experiments on the val set, the training/testing settings are strictly controlled to ensure a fair comparison between ours and CenterPoint. For the model on the test set, as noted in L339, we further use the \"fade strategy\" in [32] during training (i.e., removing the copy-paste data augmentation in the last 5 epochs). This can improve the performance by about 2% mAP. Additionally, the test set results of other methods are directly token from their original papers and there might be other subtle differences in the training/testing process. This is why our method shows better performance than CenterPoint on the test set.\n\n**Q3: The feature map of each level has to be resized to the original image size.**\n\nNo, we do NOT resize the feature maps of all levels to the original image size. As noted in L216, only the first level of feature maps has the same size as the original image size, and other levels are down-sampled by powers of $2$, respectively, as in the standard FPN. Thus, FPN is still needed.\n\n**Q4: Does random scale augmentation cause object artifacts?**\n\nAlmost not for two reasons. 1) We apply the random scale augmentation globally, i.e., all points in the same point cloud are proportionally scaled by the same scale factor at a time. As a result, this does not alter the azimuth and inclination angles of these points in the spherical coordinates system, and neither do the range view projections of these points. 2) We choose the scale factor in the range from $0.95$ to $1.05$, which only changes the point cloud by a small amount and thus will not cause object artifacts.\n\n**Q5. MRV's performance in multi-frame settings still falls behind BEV's.**\n\nYes, the multi-frame fusion in range view still is an open question and more efforts are needed. But our RV-based detector is much faster (RV 38.76ms vs. BEV 73.95ms). 
Ours with a latency similar to BEV-based CenterPoint can achieve a similar result.\n\n| | Time (ms) | mAP (%) |\n|-------------|-----------|---------|\n| CenterPoint | **74** | 60.40 |\n| Ours | 79 | **60.48** |\n\nAdditionally, we would like to highlight that we are the first one to show that range view detectors can also benefit from multi-frame fusion. The difficulty of multi-frame fusion is previously considered to be one of the critical obstacles to the RV pipeline, and none of the previous range view detectors shows any positive results. This impedes the practical application of the RV pipeline that has many unique merits (e.g., avoiding voxelization/sparse convolutions/being lightweight and etc.). Moreover, our work opens up many new possibilities for this promising direction. For example, we enable the RV-based detectors to predict the velocity of the objects with multi-frame fusion (AVE 0.301 vs. AVE 1.08 of single-frame models).\n\n\nThank you very much for reviewing our work. We sincerely hope you might reconsider the meaning of our work.\n", " This paper demonstrates a simple baseline range-based 3D detector for multi-frame inputs, which shows the potential way to achieve efficient 3D detection compared with current BEV pipelines. Experiments show the proposed method achieves comparable results with BEV methods such as CenterPoint with a faster speed. Pros:\n\n1. The paper writing is smooth and organized well.\n\n2. The paper comprehensively investigates some basic designs (3D inputs, network backbone, detection head) of range-based 3D detectors and compares the advantages and disadvantages of range-view detectors. Some interesting points such as multi-frame point encoding and modality-wise convolution are proposed and discussed. Overall, it provides a simple baseline approach for RV detectors.\n\nCons:\n\nThe paper lacks a direct comparison with other range-based 3D detectors. Further comparison with RangeDet or RCD or RSN (with the same multi-round range projection inputs) at the level of the operator could further enhance the paper statements, specifically in their speed/runtime comparison. If not, please give the explanations.\n 1. Please see the cons as above.\n2. Why FCOS-LiDAR(c128) gets better results than CenterPoint on nuScenes test set and worse results on the val set? \n3. As the feature map of each level has to be resized to the original image size, is it necessary to apply FPN to generate multi-level prediction? An ablation study compared with one-level (concatenation or summation) prediction can be provided.\n4. Random scale augmentation involved in training might affect all point angles in range view projection. Would it cause object artifacts during projection?\n Yes, in the conclusion and experiment parts. MRV's performance in multi-frame settings still falls behind BEV's. ", " In this paper, the authors propose a range-view CNN based 3D detector. Instead of using a bird-eye-view representation (like most of current models), the sensor data are encoded into a range view. It allows a simple multi-round range view to consider a temporal information. The network applies modality-wise convolution, a kind of depth-wise convolution adapted to the proposed encoding. 
Experiments show that the simple proposed model performs well. Strengths:\nEven if other range-view models already exist, the contribution of the paper is the combination of this kind of representation with an encoding of the data combining cartesian + spherical coordinates + a multi-round range model that uses modality-wise convolution. Modality-wise convolution is a nice idea in order to reduce the size of the model (like with depth-wise) but using an inductive bias assuming that channels encoding different information can be split in the convolution.\n\nWeaknesses: \nRegarding the ablation study of table 3, it seems that the best combination for modality groups is multi-frame on each channel. However, the mAP is only slightly higher (less than 1%). \n\nThere is another issue with the experimental part. The baseline models you compare with should be extended with the ones given on the nuScenes benchmark: https://www.nuscenes.org/object-detection?externalData=no&mapData=no&modalities=Any\n\nMoreover, it would be better to give the performances of the models reported on this page instead of the ones reported in the original papers, to have updated criteria. If you do that, you will see that your model is not SOTA. However, it remains simple and this is interesting in some applications. \n Regarding the ablation study of table 3: why do we have a higher time for Multi-Frame? Can you give more details about this? Do you use a sliding temporal window to compute one output for each time or do you compute some batches? /", " Because 3D detectors for autonomous driving need to perform quickly and have good detection performance in a limited computational environment, voxel-based and BEV-based neural detectors have been suggested in existing works. However, the authors proposed a novel type of 3D detector that can maximize the utility of range view inputs. They additionally propose a multi-round range view projection to contain more points within a single range view image and the corresponding stem layer which is called modality-wise convolution. Their view of LiDAR points will have implications for other researchers as well, and the proposed method is sufficient to serve as a baseline for range-view-based 3D detectors in terms of performance.\n\nThe proposed module and its motivation are well aligned. In particular, the backbone network for range view was reconstructed based on resnet50, and the explanation of its motivation is very interesting.\n\nThe authors designed a range view-based neural network through multi-round range view projection (MRV) that minimizes the point loss that may occur in the process of converting to range view images, and a specified 2D CNN network for lidar points.\n\nHowever, when generating range-view images, a single pixel can contain multiple points as a range-view image is generated through multiple projections. In this regard, it is necessary to explain the target assignment process more clearly. For example, when two objects are assigned to one pixel, the authors need to describe how they are handled. \n **How to find the points corresponding to a given range view pixel?** I reckon that the method of finding a point corresponding to a pixel in a range view image with various scales is more complicated, unlike BEV. For feature maps that are smaller than the original size, there may be various options for how to determine the depth for a pixel. This is because points with different depths can be assigned simultaneously within one pixel in this case. 
In other words, it can introduce ambiguity when dealing with training targets. How you find the points that map to a pixel also affects dynamic allocation. Please provide a detailed explanation of this.\n\nAre the authors willing to release the code for active research on range views?\n They are aware of the dangerous uses of object detection. In addition, it would be good if a future work section were added for this field." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "0ehnKlMjd9i", "kVGGwqKBjBy", "R3ITWvldA-8", "R3ITWvldA-8", "hlbVeC4nYsv", "n5MpGK03q03", "KWtzlXKRzzM", "hlbVeC4nYsv", "0zzFP6kmU3A", "nips_2022_2-REuflJDT", "nips_2022_2-REuflJDT", "nips_2022_2-REuflJDT" ]
nips_2022_mMuVRbsvPyw
GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models
Prevalent semantic segmentation solutions are, in essence, a dense discriminative classifier of p(class|pixel feature). Though straightforward, this de facto paradigm neglects the underlying data distribution p(pixel feature|class), and struggles to identify out-of-distribution data. Going beyond this, we propose GMMSeg, a new family of segmentation models that rely on a dense generative classifier for the joint distribution p(pixel feature,class). For each class, GMMSeg builds Gaussian Mixture Models (GMMs) via Expectation-Maximization (EM), so as to capture class-conditional densities. Meanwhile, the deep dense representation is end-to-end trained in a discriminative manner, i.e., maximizing p(class|pixel feature). This endows GMMSeg with the strengths of both generative and discriminative models. With a variety of segmentation architectures and backbones, GMMSeg outperforms the discriminative counterparts on three closed-set datasets. More impressively, without any modification, GMMSeg even performs well on open-world datasets. We believe this work brings fundamental insights into the related fields.
Accept
This paper proposes to learn a generative model (a mixture of Gaussians) on top of discriminative features. The proposed method achieves strong performance on semantic segmentation and it is capable of anomaly detection. The paper was reviewed by 4 reviewers. Reviewer o8w9 (rating: 5) pointed out 2 missing references and asked about speed. The authors clarified the difference between their work and the two references, and showed that the impact on inference speed is negligible. Reviewer 5AWr (rating: 6) asked about the assumption of a uniform prior on classes. The authors explained that this is a common assumption, and added results with a non-uniform class prior. Reviewer TvcJ (rating: 5) asked about the motivation for the Gaussian mixture model, the meaning of the mixture components, the computational overhead, and the comparison with an MRF prior. The authors addressed the questions in detail and added new results. The reviewer read the authors' rebuttal and raised the rating to the current level of 5. Reviewer 1Te2 (rating: 7) wrote a very detailed review and was mostly satisfied with the authors' rebuttal, although this reviewer was still not entirely sure about joint training. --------------- Overall, for the two reviewers with ratings 5, their concerns did not suggest serious flaws of the paper, and the authors addressed their concerns satisfactorily. I thus lean toward accepting this paper.
train
[ "FpK1jHARzm", "dZv69INwaq9", "6Y3_J9izJt", "trUCSZXSz_N", "QcPgyaP16z9", "-i6Q_nhJlSa", "01hMxt2Gg5j", "DpNAr3Q4Lgs", "RLnRE_gP4ji", "XJBy0FOo9xl", "BZ_iHsk8wS1", "VgN_x5Jk193", "hUSxQb3KhMj" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " This response clarifies some big things, and I continue to recommend acceptance.\n\nI'm still not sure I follow exactly how joint training is being done, but certainly glad that it is, and I think this comments gives some more hints. Based on this rebuttal, I think there's a good chance that it will be clearer in the final version.", " I read the authors' response and other reviewers' comment and changed my rating to borderline accept ", " Dear reviewer,\n\nIn our previous responses, we have tried our best to address your concerns. \n\nIf you have any further comments or questions, please let us know. Thanks. \n", " \n#### **Q3. Extension to image classification** \n\n**A3:** Totally agree. The main reason is due to our limited GPU resources. But luckily, we just got some GPUs and produced some results. From our current very initial results, we indeed observe consistent performance improvement on top of the VGG, on the standard ImageNet dataset, by training from scratch. We thus believe our idea is powerful and principled. Extending GMMSeg to image classification is definitely our next focus.\n\n\n---\n\n#### **Q4. Clarify presentation of feature extraction**\n\n**A4:** Thanks for your careful review! In the original submission, we keep notations simple, straightforward, and easy to follow. Some innocuous simplifications are also introduced to fit the page limit. After reading your comment, we feel the related statement may cause misleading. We will clarify the writing with a footnote: \n\nIn segmentation, dense feature extractor $f_{{\\theta}}$ maps pixel samples with image context, *i.e.*, $f_{{\\theta}}: \\mathcal{R}^{H\\times W\\times 3}\\rightarrow \\mathcal{R}^{H\\times W\\times D}$. We simplify the notations into pixel form, *i.e.*, $f_{{\\theta}}: \\mathcal{R}^{3}\\rightarrow \\mathcal{R}^{D}$, to keep a straightforward formulation. \n\n---\n\n\n#### **Q5. Questions on 'Ideal' EM parameter estimation with memory**\n\n> Q5.1. After training, and in inference are the EM parameters simply those from the last training iteration?\n\n**A5.1:** Yes.\n\n> Q5.2. This'll still be based on a smaller memory bank, how far off from \"ideal\" EM parameters could these be?\n\n**A5.2:** The estimated parameters are not only based on the memory, but moving statistics over the entire training dataset thanks to the momentum EM mechanism (*c.f.*, Ln248-Ln251). The momentum mechanism stabilizes the training (*c.f.*, Ln251-252) and also helps us collect statistics over the whole dataset without a complicated post-process.\n\n> Q5.3. The supplement says that a large enough memory bank represents the overall distribution, is this assertion based only on the fact that the performance has saturated (up to 3 digits) at 32K?\n\n**A5.3:** Thanks for your careful review. Our assertion is NOT only based on the fact that the performance has saturated at 32K. It is commonly agreed that there is much redundant information for this pixel classification task, *i.e.*, many neighbor pixels are very similar. This is also verified by some sampling based training strategies in this field, such as [ref1] PointRend: Image Segmentation as Rendering.\n\nBut this assertion might need careful examination; if we can have a large enough memory that can store the whole training dataset, maybe we can achieve better performance. However, this further brings the concern on the trade-off between performance and training resource cost. \n\n---\n\n\n#### **Q6. Minor comments/typos**\n\n**A6:** We are grateful for your careful review! 
We apologize for these errors and will definitely correct them. \n\n---\n\n#### **Performance comparison with [d] \"Dense anomaly detection by robust learning on synthetic negative data\"**\n\nThanks for summarizing the differences between our algorithm and [d], and admitting our contribution to the field of anomaly detection. \n\nHere we respectfully remind the reviewer that the comparison to [d], *i.e.*, 63.43, vs 43.47, is a little bit unfair. In addition to introducing synthetic anomalies during training, [d] adopts a trick (Sec. 5.2, page 14 of [d]): \"... a variant of our method which focuses on the ground during inference (GF stands for ground focus). We define the ground as a convex hull which covers all pixels predicted as the class road or the class sidewalk. This change entails a huge improvement in both metrics\". We personally do not admire such a strategy, as this is just using the data bias of the Fishyscapes dataset. It is not held in general cases: the anomaly may appear everywhere in the view. \n\nWithout such GF trick, [d] only gains 39.4 AP (Table 2), which is inferior to ours (43.47). Without neither architectural change (like [34–37]), nor re-training (like [38–40]), nor post-calibration (like [17,18,41–46]), our algorithm yields very promising results on anomaly detection. We believe our work brings fundamental insights into this field. And, as you noticed, this is only one aspect of our contributions.\n\n---\nFinally, thank you again for your very detailed and constructive comments, from which we really learn something!", " Thank you for recognizing the promising aspect of our work and providing valuable suggestions to help us improve clarity. We reply to the concerns and questions below. Our responses shall be incorporated into the revision.\n\n#### **Q1. Comparison and relation to CRF-based methods**\n\n**A1:** Thanks for your careful review. \n\nFirst, although CRF shares some advantages with GMMSeg, previous CRF-based segmentation models are still largely built upon discriminative softmax classifier, and cannot produce well-calibrated uncertainty estimations, hence struggling for handling OOD. For example, CRF-based models [2,61,e] contain the unary energy component, which is commonly modeled with negative label assignment probability $p(c|x)$ (model output after the softmax layer) [2,e] or logits (model output before the softmax layer) [61]. CRF-based models still rely on the softmax to parse the probability in a discriminative way. The 'sin' of softmax remains in these models (*c.f.*, Ln31-Ln36). From this aspect, GMMSeg earns superiority.\n\nSecond, CRFs contain pairwise energy components, which model the pairwise priors as you mentioned. From this aspect, GMMSeg lacks the explicit probabilistic modeling of \"context\". However, modern network designs for segmentation models have already implicitly or explicitly captured the correlations among pixels during deep feature extraction (*i.e.*, CNN for gathering small local context, ASPP, and neural attention for long-range modeling). As a very early step towards a generative model based on GMM for image segmentation, our work also comes with a few intriguing questions, and this issue is one of them.\n\nThird, to better address your concern, we provide the comparison experiments on top of DeepLabV3+-ResNet101, on the ADE20K dataset as summarized in the below table. As seen, GMMSeg boosts performance. However, CRF post-processing even brings a negative impact. 
This is also one of the reasons that CRF post-processing is less used in current high-performance segmentation models. \n\n| | mIoU |\n| ----------------------------------- | ---- |\n| DeepLab$_{\\text{V3+}}$ | 44.6 |\n| DeepLab$_{\\text{V3+}}$ + CRF | 44.1 |\n| **GMMSeg-DeepLab$_{\\text{V3+}}$** | **46.0** |\n\n---\n\n#### **Q2. Hybrid training of discriminative feature extractor and generative classifier**\n\n**A2:** Sorry for this misunderstanding. \n\nFirst, an appealing characteristic of our GMMSeg is that, the feature extractor is trained in a discriminative way, but the computation of the discriminative learning loss (Eq. 11) relies on the generative GMM classifier. Or more specifically, the posterior $p(c|x)$ is derived from the GMM. In turn, the EM optimization of GMM is conducted on the learned feature embedding space. Thus our method achieves a dense blend of generative optimization and discriminative training; the optimizations of EM and feature extractor are densely coupled. This allows our GMMSeg, as we repeatedly mentioned in the manuscript, to inherit the advantages of the two worlds. \n\nSecond, as optimizations of EM and feature extractor are densely coupled, GMM classifier and feature extractor are both gradually updated and aligned with and adaptive to each other, making GMMSeg a compact model.\n\nThese are also two crucial reasons why we cannot learn the GMM after the feature extractor is fully trained. \n\nMoreover, if the GMM was simply ignored, that means a softmax classifier is still needed to train the feature extractor. Actually, we have already conducted experiments with such a naïve strategy, at the very beginning of this project. We observed a clear performance drop, even compared with the original softmax-based baseline. We provide the experimental results below. We denote learning GMMs from a fully trained (with softmax) NN space as `DeepLabV3+ + GMM`. Compared to `DeepLabV3+ + GMM`, an end-to-end trained GMMSeg-DeepLab$_{\\text{V3+}}$ shows a significant improvement, which in turn supports our claim in Ln65-66. \n\n| | mIoU |\n| ----------------------------- | ---- |\n| DeepLab$_{\\text{V3+}}$ | 44.6 |\n| DeepLab$_{\\text{V3+}}$ + GMM | 31.6 |\n| **GMMSeg-DeepLab$_{\\text{V3+}}$** | **46.0** |\n\n\nThe joint optimization of the generative classifier and discriminative feature embedding is one of the most exciting parts and contributions of this work. And, this is one of the keys to our promising performance. First training a feature extractor and then replacing the softmax classifier with a GMM cannot get good results. \n\n---", " \n#### **Q3. Visualization of learned components**\n\n**A3:** We visualize the learned components in Sec. S2 in the revised *suppl.*. It shows the probability of pixel assignments with $M=3$ mixture components for each class. Different components are illustrated by different colors (*i.e.*, red, green, blue). For each pixel, the highest probability of being assigned to the component is visualized using the corresponding color. As shown in Fig. S1, GMMSeg can automatically discover informative patterns in the class. \n\n---\n\n#### **Q4. Overhead in training/inference**\n\n**A4:** Sorry for this confusion. We have discussed these issues. In the Limitation Analysis section (*c.f.*, Sec. 
S4 in the *suppl.*), we report the training computational overhead:\n\n> One limitation of our approach is that the EM based generative parameter estimation needs extra optimization loops in each training iteration which would reduce the training efficiency in terms of time complexity. However, in practice, we find one EM loop per training iteration is good enough for global model convergence, which only brings a minor computational overhead, *i.e.*, ~5% training speed delay.\n\nFor the inference, the delay is almost negligible (Ln278). We report the detailed inference speed in the below table. The same setup to the ablations is adopted, where the DeepLab$_{\\text{V3+}}$-ResNet101 architecture is used. Speed is measured on a single NVIDIA GeForce RTX 3090 GPU.\n\n| | fps |\n| ----------------------------- | ----- |\n| DeepLab$_{\\text{V3+}}$ | 14.16 |\n| GMMSeg-DeepLab$_{\\text{V3+}}$ | 13.37 |\n\n---\n\n#### **Q5. Comparison to CRF post-processing; ''*Both GMMSeg and CRF improve performance at the cost of computation overhead*''**\n\n**A5:** First, GMMSeg introduces minor computational overhead at training (~5% training speed delay). Once trained, the model shows comparative inference speed (*c.f.*, `Q4. Overhead in training/inference`) to the counterpart. \n\nSecond, to the best of our knowledge, GMMSeg is the first semantic segmentation method that reports promising results on both closed-set and open-world scenarios by using a single model instance. It shows exciting results on anomaly segmentation where the discriminative CRF approaches suffer. We believe GMMSeg brings orthogonal contributions to the community. \n\nThird, to address your concern, we provide below the comparison results on top of DeepLab$_{\\text{V3+}}$-ResNet101, on the ADE20K dataset. As seen, GMMSeg yields much better performance, compared with CRF post-processing. The latter even brings a negative impact. This is also one of the reasons that CRF post-processing is less used in current high-performance segmentation models. \n\n| | mIoU |\n| ----------------------------- | ---- |\n| DeepLab$_{\\text{V3+}}$ | 44.6 |\n| DeepLab$_{\\text{V3+}}$ + CRF | 44.1 |\n| **GMMSeg-DeepLab$_{\\text{V3+}}$** | **46.0** |\n\n---\n\n#### **Q6. Oversimplified to model segmentation as pixel classification; Missing modeling of pixel correlations**\n\n**A6:** First, many previous methods formulate semantic segmentation as *pixel classification* [1]. \n\nSecond, modern network designs for segmentation models have already implicitly or explicitly captured the correlations among pixels during deep feature extraction (*i.e.*, CNN for gathering small local context, ASPP, and neural attention for long-range modeling). \n\nThird, even with such an elegant model, consistent performance gains can still be observed, which exactly demonstrates the power of our idea and the novelty of this work. \n\nFourth, as a very early step toward a generative model based on GMM for image segmentation, our work also comes with a few intriguing questions, and this issue is one of them. \n\n---", " Thank you for your time and valuable feedback. Below we address your concerns with additional experiments and discussions. We shall incorporate the responses into the revision.\n\n#### **Q1. ''Why not directly model the joint distribution of image and label maps using deep networks?''**\n\n> Q1.1. 
The motivation of using GMMs on top of a deep network is not very clear.\n\n**A1.1:** GMMs can express almost arbitrary continuous distributions and is among the most famous probabilistic generative models (Ln114). The hybrid framework, *i.e.*, using GMMs on top of a deep representation embedding network, endows GMMSeg with the strengths of both discriminative and generative models: \n\n1. The discriminative representation extractor offers *expressive* feature representations as demonstrated by extensive experiments in Sec. 4.1.\n2. The generative classifier allows GMMSeg to capture the multimodality of data, to be well-calibrated, and to naturally reject the abnormal inputs without any modification, as demonstrated by experiments in Sec. 4.2.\n\nPlease also refer to Ln58-Ln60, Ln63-69 in the Introduction section, and Ln103-121 in the Related Work section of the main paper, and Sec. S2 in the *suppl.* for detailed discussions on our motivation.\n\nLn58-Ln60: *'GMMSeg smartly learns generative classification with end-to-end discriminative representation in a compact and collaborative manner, exploiting the benefit of both generative and discriminative approaches.'*\n\nLn63-69: *'... with the hybrid training strategy – online EM based classifier optimization and end-to-end discriminative representation learning, GMMSeg can precisely approximate the data distribution over a robust feature space... the distribution preserving property allows GMMSeg to naturally reject abnormal inputs, without neither architectural change (like [34–37]) nor re-training (like [38–40]) nor post-calibration (like [17, 18, 41–46]).'*\n\nLn103-121: *'... discriminative classifier is used exclusively [24], due to its simplicity and excellent discriminative performance ... generative classifiers are widely agreed to have several advantages ... accurately modeling the input distribution, and explicitly identifying unlikely inputs in a natural way'*\n\n> Q1.2. Why not directly model the joint distribution of image and label maps using deep networks?\n\n**A1.2:** \n\n1. Directly modeling the joint distribution of very high-dimensional image data and label maps is more challenging, which is even a little beyond the scope of this work. We would like to take this as a part of our future work. \n\n2. GMMSeg is fully compatible with modern segmentation (dense feature extraction) network architectures. It can replace the discriminative softmax seamlessly, and thus can take the full benefits from the development of segmentation network architecture design (*i.e.*, dense feature extractor).\n\n\n---\n\n#### **Q2. Clarification of GMM component**\n\n> Q2.1. The meaning of the components of GMMs in the proposed framework is not very clear.\n\n**A2.1:** Each mixture component can be viewed as a representative pattern that resides in the feature space, discovered in an EM, data-driven manner. Please also refer to `Q3. Visualization of learned components` for illustrative examples. \n\n> Q2.2. It looks like all classes have the same number of components.\n\n**A2.2:** We want to avoid introducing delicate designs like this to make our framework simple, general, and elegant. And we also show such a simple strategy is enough to achieve impressive performance. Actually, it is quite simple for us to use different numbers of components for different classes, based on the occurrence frequency of class samples. 
\n\nIn addition, the widely used discriminative softmax classifier only learns one single weight vector for each class, totally ignoring the multimodality nature of data and the task. Considering our method makes an initial step towards a generative mixture model based on GMM for image segmentation, and its good properties of multimodality modeling, treating “all classes have the same number of components” as a weakness of our work seems unfair. \n\n> Q2.3. ... and every component is expected to have the same amount of pixels.\n\n**A2.3:** Sorry for this misunderstanding. Here we clarify that the equipartition assumption is *NOT* a hard constraint. It is totally OK (and always) that different clusters are assigned with different numbers of data samples. The equipartition assumption, widely used in clustering, is just a soft constraint for avoiding the degenerate solution, *i.e.*, all data samples are partitioned to a single cluster, as pointed out by [107,108]. \n\n---\n", " Thank you for your encouraging comments. Below we address the raised point with additional discussions that we will include in the final version. \n\n#### **Q. The assumption of uniform distribution of p(c)**\n\n**A:** Here the uniform prior is adopted because people usually do not have a strong belief on the class distribution beforehand, as we cannot identify the prior distribution of infinite incoming data in real-world applications [ref1]. On the other hand, the uniform prior is quite simple and widely used in related fields [ref2]. \n\nBut we also agree that some other choices like counting the class frequency in the training set as a prior should be investigated. However, the study of alternative class priors is a little beyond the scope of this paper and would potentially have a negative impact on comparison fairness (as most previous segmentation methods implicitly adopt the uniform prior). \n\nTo better address your concern, we report below the performance obtained by using the class frequency in the training set as the prior. We observe a slight performance drop, *i.e.*, - 0.4% in terms of mIoU with DeepLab$_{\\text{V3+}}$-ResNet101 architecture on the ADE20K dataset. \n\n| | mIoU |\n| -------------------------------------------------------- | ---- |\n| GMMSeg-DeepLab$_{\\text{V3+}}$ (Uniform Prior) | 46.0 |\n| GMMSeg-DeepLab$_{\\text{V3+}}$ (Training Occurrence Prior) | 45.6 |\n\n\nAs a very early step towards a generative model based on GMM for image segmentation, our work comes with a few intriguing questions, and this issue is one of them.\n\n[ref1] Bayesian data analysis. In Chapman and Hall/CRC, 1995.\n\n[ref2] A Bayesian Hierarchical Model for Learning Natural Scene Categories. In CVPR05.\n\n---", " Thank you for your time and valuable feedback. All the comments and questions are addressed below. We will incorporate our responses in the revision.\n\n#### **Q1. Missing reference [a, b]**\n\n**A1:** Thanks for bringing these two excellent works to our attention. We are happy to discuss and compare [a, b] in our final version. Though both [a] and [b] consider data distribution p(*pixel feature*|*class*) to some extent, they still rely on the *discriminative* softmax classifiers. In addition, [a] requires three fragile phases for training. However, our algorithm discards softmax classifiers from the beginning. 
More importantly, our method demonstrates it is indeed possible to train simultaneously a generative classifier with deep feature representation, and even shows better performance in both closed-set and open-world settings. \n\n---\n\n#### **Q2. Effect of inference speed**\n\n**A2:** The impact of inference speed is almost negligible (Ln278). We keep the same setup to the ablations, where the DeepLab$_{\\text{V3+}}$-ResNet101 architecture is used. Speed is measured on a single NVIDIA GeForce RTX 3090 GPU.\n\n| | fps |\n| ----------------------------- | ----- |\n| DeepLab$_{\\text{V3+}}$ | 14.16 |\n| GMMSeg-DeepLab$_{\\text{V3+}}$ | 13.37 |\n\n\n---", " This paper studies semantic segmentation. The authors developed a Gaussian Mixture-based Generative Semantic Segmentation Model. The experimental results on serval datasets demonstrate the effectiveness of the proposed method. [Strengths]\n+ Compared to the baselines, the proposed methods could bring constant improvements \n+ The abundant experiments are introduced to prove the effectiveness of the proposed method.\n\n[Weaknesses]\n- some important references are missing.\n[a] Top-down Learning for Structured Labeling with Convolutional Pseudoprior, ECCV 2016.\n[b] Exploring Cross-Image Pixel Contrast for Semantic Segmentation, ICCV 2021.\n\n- Both [a] and [b] introduce data distribution p(pixel feature|class), they all need to be included in the discussion. [b] also learn the class-related feature and bring constant improvements. It could be better to add it into comparison.\n What's the effect of the proposed method on the model inference speed? Yes", " This paper presents a novel paradigm for semantic segmentation. Specifically, the authors propose to frame the segmentation task as a generative model by modeling the probability of p(x|c) as a mixture of Gaussians. This is rather different from the popular discriminative model that adopts Softmax. + interesting idea\n+ novel paradigm, as compared to a large number of semantic segmentation models. \n\n- the assumption of p(c) = 1/C might not be very reasonable. Normally, the prior distribution of different classes is not a uniform distribution. \n - what about using a different p(c) ? - the assumption of uniform distribution of p(c) is not very reasonable. ", " This paper proposes GMMSeg, a generative model based on GMM for image segmentation that models the joint distribution of pixel features and classes. For each class, GMMSeg builds GMMs via EM so as to capture class-conditional densities. A deep network is trained end-to-end in a discriminative manner to extract features for the GMMs. This endows GMMSeg with the strengths of both generative and discriminative models. With a variety of segmentation architectures and backbones, GMMSeg outperforms the discriminative counterparts on three closed-set datasets, ADE20K, Cityscapes and COCO. Without any modification, GMMSeg even performs well on open-world datasets. Strength:\n1. It is reasonable to use generative classification models with discriminative feature learning to combine the strengths of both generative and discriminative models. \n2. The properties of GMMs make GMMSeg well adapt to multimodal data densities and allows GMMSeg to naturally reject abnormal inputs.\n3. GMMSeg is fully compatible with different network architectures. \n4. Extensive experiments are conducted on multiple benchmark datasets. It shows that the proposed method improves the performances of multiple baselines. \n\nWeakness:\n1. 
The proposed generative framework is complex. The motivation of using GMMs on top of a deep network is not very clear. Why not directly model the joint distribution of image and label maps using deep networks?\n2. The meaning of the components of GMMs in the proposed framework is not very clear. It looks like all classes have the same number of components and every component is expected to have the same number of pixels. However, the modality of different classes and the pixels of different components are not always the same. \n3. It would be more convincing if there were some discussion of what each component represents and a visualization of the learned components. \n4. Compared with the baseline, GMMSeg brings more overheads at both inference and training time. There should be a comparison in terms of training/inference speed. It would be more convincing if the proposed method were compared with baseline+CRF post-processing, as both GMMSeg and CRF improve performance at the cost of computation overhead. \n5. It is oversimplified to model segmentation as pixel classification using GMMs as it does not handle the correlations among pixels in an image. It might be more reasonable to use MRF, which has been explored in previous work, e.g. [a] \n[a] Liu, Ziwei, et al. \"Deep learning markov random field for semantic segmentation.\" IEEE transactions on pattern analysis and machine intelligence 40.8 (2017): 1814-1828.\n\n Please refer to the Strengths And Weaknesses section. The authors addressed the limitations and potential negative societal impact of their work.
Reliability diagrams are given in the supplement (Fig S1).\n\n### ii) Good choice of techniques for jointly training a generative model with a CNNs, well-validated by ablation studies.\n\nAdding new types of layers, including ones inspired by non-deep-learning models, is a common subject of a paper. This work does a more principled treatment of this than usual. First, the GMM model is learned with a choice of algorithm (Sinkhorn EM) that will produce consistent results in few iterations, which seems especially important when this model fit is repeated throughout the overall SGD training. Use of a memory bank is also a good improvement to this.\n\nThe value of Sinkhorn EM is validated by ablation study. The impact of the memory bank is also studied in the supplement.\n\n### iii) Good performance on anomaly detection.\n\nThe scores reported in the submission would not put it at the top of Fishyscapes leaderboard (https://fishyscapes.com/results). For example, as of writing the best AP score on the Fishyscapes lost & found is 63.43, vs 43.47 reported by the submission in Tables 2 & 3. However, this seems quote good for a method that isn't directly trained for anomaly detections. like the top published results in Fishyscapes. For instance, the current Fishyscapes leader [d] introduces synthetic anomalies when training a network. Instead, in the submission good anomaly detection can be extracted from a GMMSeg model trained on ordinary segmentation tasks.\n\nThis seem like a good demonstration of interpretability. As much as \"interpretability\" is an ambiguous term, the ability to infer new information from the model to use in other contexts seems like a useful example.\n\n## Weaknesses\n\n### iv) Less clear that it's an improvement over using CRF-based models in the same place, or why.\n\nThe authors contrast their against using softmax directly on the CNN features. They provide good reasons why GMMSeg would better model this problem. It seems like CRFs share a lot of the same advantages, and the contrast between GMMSeg and those methods is less obvious.\n\nThe original V1 of DeepLab [2], and several follow-up works [e,61] use CRFs to map the NN representation to class probabilities. These are cited in the related works, but the probabilistic modelling aspect of these works is not really discussed. Experimental comparisons against DeepLab are done with V3+ [47], a more recent version that dropped the CRF component.\n\nGMMSeg also lacks the \"context\" priors that CRFs give, due to the pairwise (or higher-order) potentials included in those models.\n\n### v) Not a fully combined optimization of EM and feature extractor.\n\nIn Section 3.2, the authors clarify:\n\n> the extractor’s parameters θ are only updated by the gradient back-propagated from the discriminative loss, while the GMM’s parameters are only optimized by EM\n\nI'd interpret this to mean that the feature extractor, which remains an important part of the model in GMMSeg, is still trained in a discriminative way that doesn't differ from previous methods. And the NN features are not optimized for input to the GMM component.\n\nGiven that this is the case, it's unclear why the EM needs to be learned at the same time as the feature extractor. Would there be any change to the training of the feature extraction NN if the GMM was simply ignored? 
And if so, could the GMM be learned *after* the NN is fully trained, without any change in results?\n\nThe authors make good design choices for an end-to-end joint optimization (iii), but stop short of doing so. However, empirical results do seem to show that the GMM still describes a better mapping from the NN features to the class labels than the original softmax that the NN was trained with.\n\n### vi) Only considers segmentation.\n\nAll of the authors' justifications for using a GMM to map representations to class labels seems like they'd apply equally well to classification, or any deep network trained with softmax. I wonder why the paper focuses on image segmentation exclusively, and it does make it a less widely-impactful work.\n\n### vii) Unclear presentation of feature extraction component.\n\nAdjacent pixels are usually important in segmentation, the feature in the classification problem is not really the $x_n \\in R^3$ as described in Section 3.1. The same is true for the formulation of the \"dense feature extractor.\" I'd emphasize that the feature extractor really maps $R^{H \\times W \\times 3}$ to $R^{H \\times W \\times D}$.\n\n[a] Oh et. al. \"Modeling uncertainty with hedged instance embedding\" ICLR 2019. \\\n[b] Caesar et. al. \"Joint Calibration for Semantic Segmentation\" BMVC 2015. \\\n[c] Nado et. al. \"Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning\" \\\n[d] Grcić et. al. \"Dense anomaly detection by robust learning on synthetic negative data\" \\\n[e] Vemulapalli et. al. \"Gaussian Conditional Random Field Network for Semantic Segmentation\" CVPR 2016. - After training, and in inference are the EM parameters simply those from the last training iteration? This'll still be based on a smaller memory bank, how far off from \"ideal\" EM parameters could these be? The supplement says that a large enough memory bank represents the overall distribution, is this assertions based only on the fact that the performance has saturated (up to 3 digits) at 32K?\n\n### Minor comments/typos:\n\n * 68: Forth -> Fourth\n * 86: Suggest \"modify [the] FCN architecture\"\n * 88: \"latter\" or \"later\"?\n * 94: \"modeling [the] underlying data distribution\"\n * 181: Is \"hard\" the right adjective here? Maybe the grammar would be better with \"...why it is hard for existing segmentation models to...\"\n The limitations seem to be discussed reasonably well. The submission focuses on a problem that is well-described by benchmarks, and the it demonstrates that the method is likely to perform better than competitors on out-of-distribution or anomalous examples.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "QcPgyaP16z9", "-i6Q_nhJlSa", "VgN_x5Jk193", "hUSxQb3KhMj", "hUSxQb3KhMj", "VgN_x5Jk193", "VgN_x5Jk193", "BZ_iHsk8wS1", "XJBy0FOo9xl", "nips_2022_mMuVRbsvPyw", "nips_2022_mMuVRbsvPyw", "nips_2022_mMuVRbsvPyw", "nips_2022_mMuVRbsvPyw" ]
nips_2022_9aLbntHz1Uq
Counterfactual Fairness with Partially Known Causal Graph
Fair machine learning aims to avoid treating individuals or sub-populations unfavourably based on \textit{sensitive attributes}, such as gender and race. Those methods in fair machine learning that are built on causal inference ascertain discrimination and bias through causal effects. Though causality-based fair learning is attracting increasing attention, current methods assume the true causal graph is fully known. This paper proposes a general method to achieve the notion of counterfactual fairness when the true causal graph is unknown. To select features that lead to counterfactual fairness, we derive the conditions and algorithms to identify ancestral relations between variables on a \textit{Partially Directed Acyclic Graph (PDAG)}, specifically, a class of causal DAGs that can be learned from observational data combined with domain knowledge. Interestingly, we find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided: the sensitive attributes do not have ancestors in the causal graph. Results on both simulated and real-world datasets demonstrate the effectiveness of our method.
Accept
This paper has divergent views in the sense that two reviewers have given positive assessments (6 and 7) while the other reviewer has given a negative assessment (score of 3). This paper also had very 'heavy' discussions between the reviewer with the negative opinion and the authors. First of all I would like to thank the reviewer involved for patiently discussing with the authors and dedicating valuable personal time. Let me start with the aspects all reviewers *more or less agree* on: a) The main technical piece is an efficient algorithm and provable guarantees for identifying definite non-descendants and definite descendants from an MPDAG - maximum partially directed acyclic graph - the equivalence class of Causal DAGs one obtains after incorporating any arbitrary side information. Previous such results were known for CPDAGs and they don't carry over to MPDAGs. Therefore it is a non-trivial result (specifically Lemma 4.4). So all reviewers agree that finding definite non-descendants in equivalence classes that also include side information is a very solid contribution. b) The aspect on which reviewers had divergent opinions is this: the paper's claim to be able to train counterfactually fair classifiers leveraging the result from [Kusner et al. 2017] that any function of non-descendants is counterfactually fair. One of the reviewer's strong contentions is that in most fairness datasets, most variables that are highly predictive of outcomes will also be downstream of sensitive attributes like race, etc., and therefore relying only on non-descendants is not exactly a realistic application. The authors cited their empirical structure learning results that show very few descendants, and comments from the Kusner et al. 2017 paper, to bolster their case. The reviewer responded by citing alternate statements from the same paper, etc. *My opinion* is that in a specific context where fairness with respect to a specific sensitive attribute is desired, there are also often other features that have no causal relationship with the sensitive attribute but have a *correlation* (examples include age and race, race and gender, etc.). To cite a recent reference please see Example 15 in https://arxiv.org/pdf/2207.11385.pdf (this reference is recent and I am *not* expecting authors or anyone else to have known this - it is just to demonstrate the point). The example shows *testable* correlations between sensitive attributes and non-descendants in COMPAS and Adult datasets. This shows that a) neither causal sufficiency nor b) the non-existence of non-descendants is realistic. In fact, spuriously related non-descendants give rise to spurious bias which may not be an object of correction for fairness (broadly speaking). This shows that causal sufficiency is a strong assumption (as the authors have assumed) and also that non-descendants do exist. c) Another point to be noted is that Kusner et al. 2017 do consider confounded models, unlike the authors. Once you view exogenous variables, endogenous (observed) variables and the sensitive attribute as one full deterministic system, their point is that ALL exogenous + non-descendant endogenous variables are "non-descendants" topologically and therefore could be used. They did not imply 'only' non-descendant endogenous variables, as the authors contend in their discussions. In fact, the algorithm section in Kusner et al. 2017 advocates sampling the exogenous variables from some side information (level 2 and 3 information) and forming a predictor as a function of exogenous *and* non-descendant endogenous variables. 
Therefore, the reviewer has a valid point on this aspect as well, and the authors may want to pay attention to it. *In summary*: The authors' contention that the Kusner et al. 2017 paper advocates non-descendant endogenous variables as its main sufficient criterion appears to be not exactly correct. A model that allows non-descendants and their confounding with the sensitive attribute is more realistic. Nevertheless, the authors' core technical structure learning contribution is noteworthy. If this line of work is pursued so that one could find non-descendants even under limited confounding (between sensitive attributes and non-descendants, a mild violation of causal sufficiency), it would be a step towards obtaining counterfactually fair classifiers (although even such a classifier would have to sacrifice a lot of accuracy, depending on how many descendants one observes). Even the positive reviewers have opined that the main strength of the paper is a solid structure learning result that identifies non-descendants in a fully observational setting. *Recommendation*: In the spirit of not blocking valid, fundamental ideas, and given that one cannot always make the weakest set of assumptions and still make progress, I tend to favor acceptance. A *very strong* suggestion to the authors: I would place structure learning as the centerpiece and motivate it by the need to learn non-descendants (in the general sense), as motivated by Kusner et al. 2017. The authors should also highlight FairRelax, a relaxation they propose that uses possible descendants and definite non-descendants to predict; it seems closer to a counterfactually fair one than other approaches and thus removes the singular focus (which the discussions would have one believe) on only observed definite non-descendants.
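For reference, the counterfactual fairness notion and the sufficient condition from Kusner et al. 2017 that the discussion above keeps returning to can be written as follows (a sketch of the standard formulation; the symbols $A$ (sensitive attribute), $X$ (remaining features), $U$ (exogenous variables) and $\hat{Y}$ (predictor) are generic and not tied to this paper's notation):

$$P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big) \quad \text{for all } y \text{ and all } a'.$$

Lemma 1 of that paper then gives the sufficient (but not necessary) condition under debate: if $\hat{Y} = f(Z)$ where $Z$ contains only non-descendants of $A$ in the causal graph (exogenous variables included), then $\hat{Y}$ is counterfactually fair.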
test
[ "jVND2YMX9GK", "N6sBmkSaWFm", "9HLwOOjuj_7", "X04VQkrm_fk", "7moC7oAvCtP", "KrkrH4eYgA", "zuKOiCzhUcK", "4wbAzBo3XVm", "84DZ6azLia", "E955xneNcfi", "Pa6m_my4XGP", "7y40m6iw65k", "y4PfFI5opj", "9iETKhzjKM8", "uxf-NCgHVx", "xwE7x6ctgNG", "-6ihAznOPKZ", "pfnAf1HSATk", "3bsovDta5sq", "0gRm0DRkiXw", "RNLTSPxJ64F", "ZFyVul-ZnWt", "2RIL0Tx2iW", "1I-nkCQnnx6", "dG6JZ05zui", "1yUao1HJSRW", "KPQyiOSdSKq", "nLNLcqz7lpl", "1lnID7OH7hc", "aj5YJA8cgkn", "xDaqcLe0za", "bCP__rlpsp" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Error terms (or latent variables) are never observed, they are estimated. The level 2+ methods do that by explicitly using the sensitive attributes (and, depending on the structure, their descendants). Hence the actual function that computes predictions explicitly uses the sensitive attributes (and etc). You can see this in the math formulas in that paper which I replaced by \"...\" in my quote. The function explicitly takes R and G as inputs.\n\nYour argument from the real data experiment was that it showed an example where some variables are not descendants of the sensitive attribute. You cannot reach that conclusion based on the output of your algorithm alone, because, as you know, the algorithm requires additional modeling assumptions (e.g. no unmeasured confounding) for its output to have any guarantee.", " I said \"SCM discovery\" merely as the general category of methods I would describe this fitting under, with identifying ancestor relations as a special case enabled by some additional assumptions (e.g. the ancestral closure assumption of a sensitive attribute)\n\nPlease do not make the discussion tedious by objecting over terminology. I have already dedicated more time to this discussion than I believe is usually expected of reviewers, so I prefer to keep it focused and reach a conclusion.", " > I believe the value in the current paper is mainly as a fairly general SCM discovery algorithm. The algorithm relies on an ancestral closure property which some causal fairness methods assume. Hence the paper motivates this algorithm as an application to counterfactual fairness, as indicated in the title of the paper. I believe this is a poor motivation/application, because the version of fairness it focuses on is too narrow and unrealistic,…\n\n**Please note that our method is not a SEM discovery method. Instead, we rely on the output of the current causal discovery methods to identify ancestor relations.** I do not think we need to answer your following points given that your first point is already wrong.\n", " 1. Please notice that our FairRelax method also uses possible descendants to make prediction.\n\n2. We believe we are correct about the Levels 2 and 3 in [Kusner et al., 2017].\n> We estimate the error terms by first fitting two models that each use race and sex to individually predict GPA and LSAT. We then compute the residuals ... We use these residual estimates ... to predict FYA. We call this Fair Add.\n\n Please notice that the error terms in level 3 [Kusner et al., 2017] used to make prediction are **non-descendants and also independent of the race and sex** , as mentioned in section 5,\n > In Level 3, we model GPA, LSAT, and FYA as continuous variables with additive error terms **independent of race and sex** (that may in turn be correlated with one-another). \n\n3. > the real data experiments cannot be used as a proof because we do not know a ground truth for comparison.\n\n Since our paper is not intended to test the causal discovery algorithm, we do not need the ground-truth causal graph. Besides, the paper [Kusner et al., 2017] is based on an assumed causal graph, without the ground-truth graph as well. If the causal discovery algorithm is trustable, our method is valid in practice.\n", " I believe the value in the current paper is mainly as a fairly general SCM discovery algorithm. The algorithm relies on an ancestral closure property which some causal fairness methods assume. 
Hence the paper motivates this algorithm as an application to counterfactual fairness, as indicated in the title of the paper. I believe this is a poor motivation/application, because the version of fairness it focuses on is too narrow and unrealistic, namely the sufficient--but not necessary--condition of Lemma 1 in [Kusner et al., 2017] regarding using only non-descendants of the sensitive attributes. I believe the paper would be stronger if it were framed as a more general purpose causal discovery algorithm and not specifically as a method for counterfactual fairness.\n\nThere was much discussion about my contention that the [Kusner et al., 2017] Lemma 1 version of fairness is too narrow and unrealistic. I have replied elsewhere (in the thread of responses to my review) in more detail to the authors most recent points about this above. The most important takeaways here should be that (1) the real data experiments cannot be used as a proof because we do not know a ground truth for comparison, and (2) the authors are incorrect about the Levels 2 and 3 in [Kusner et al., 2017], for example as that paper mentions in an application:\n\n> We estimate the error terms by first fitting two models that each **use race and sex** to individually predict GPA and LSAT. We then compute the residuals ... We use these residual estimates ... to predict FYA. We call this Fair Add\n\nI maintain that only using non-descendants of sensitive attributes is a narrow and unrealistic version of fairness, and hence that it is a poor (and unnecessary) choice for the title and main framing of the current paper.\n\n", " I don't wish to quote and respond line by line trying to challenge every detail, that is counter productive and loses sight of the big picture. As a reviewer I am providing my judgment about the work, so I begin by reiterating my central message:\n\n> **I see this work in its current stage as a classic \"solution in search of a problem,\" and the application to counterfactual fairness (and *specifically the version using an unrealistic sufficient condition*) is not a good enough problem** for me to believe the contribution is significant enough for this conference.\n\nI also want to reiterate the positive parts of my judgment that I think this contribution could be valuable for some *other* applications. However, the paper is framed entirely around counterfactual fairness, such that *even of the title of the paper does not indicate to potential readers that it contains a more general purpose causal discovery algorithm!* I would increase my rating if the authors changed this framing, but they have argued against that.\n\nIt is not my responsibility to convince the authors that decades of social science research cannot simply be dismissed as follows:\n\n> the Wikipedia link is not convincing\n\nThe link contains references to dozens of papers for each of the topics of education, housing, health, wealth, etc. 
The professional necessity of publishing an algorithm in a computer science conference does not justify simply dismissing other entire fields of study.\n\nI linked to that wikipedia article only as an example, but I would predict that if we choose almost any machine learning task where people care about \"fairness\" with respect to any \"sensitive attribute,\" and if we go do a literature review on the social science related to that sensitive attribute, we will find many empirical studies establishing relationships between that attribute and causal pathways to the other observed variables we want to use. As the [Kusner et al., 2017] paper points out, we should care about such \"domain knowledge\":\n\n> Level 2. Postulate background latent variables that act as non-deterministic causes of observable variables, based on explicit domain knowledge...\n\nAnd as some of the same authors wrote elsewhere https://www.nature.com/articles/d41586-020-00274-3\n\n> Researchers in statistics and machine learning need to know more about the causes of unfairness in society. They should work closely with those in disciplines such as law, social sciences and the humanities.\n\nThe same authors have written here https://arxiv.org/abs/1805.05859 that\n\n> Despite its desirable features, **there are major prices to be paid** by avoiding structural equations. For technical reasons, enforcing such constraints require throwing away information from any descendant of A that is judged to be on an “unfair path” from A to Y .\n\nMy point is that those authors are clearly not arguing for staying at what they call \"Level 1\" as some kind of default, standard approach. They have written multiple times about the importance of doing more explicit modeling that incorporates domain knowledge, i.e. levels 2+. So let's return to those levels and your claim regarding them:\n\n> Two other levels of assumptions cannot depend on descendants as well, which is discussed in section 4.2 and illustrated explicitly in Figure 2. The latent variable in level 2 and the error term in level 3 are both non-descendants of the sensitive attributes. \n\nThis is a misunderstanding. It is not generally possible to do Levels 2 and 3 without using descendants of A, because these descendants may need to be used to estimate the latent factors. For example, see where they use Level 3 in their section 5:\n\n> We estimate the error terms by first fitting two models that each **use race and sex** to individually predict GPA and LSAT. We then compute the residuals ... We use these residual estimates ... to predict FYA. We call this Fair Add\n\nSo their Level 3 fair predictor is a function that explicitly uses the sensitive attributes and their descendants.\n\nFinally, I initially did not respond to the point below because I wanted to keep my discussion focused on the key reasons for my judgment, but since it is now being reiterated I will respond.\n\n> Furthermore, how would you interpret the real data experiment in our paper?\n\nIt is irrelevant because (1) we do not know the \"ground truth\" in this data to compare the results to, and (2) standard assumptions like no selection bias and no confounding will almost never hold on any real dataset.", " Dear AC and reviewers,\n\nThank you very much for reviewing our paper. After several rounds of discussions, there is one remaining concern from reviewer UUSP. We briefly summarize the concern and our response below. 
\n\nReviewer UUSP's concern is mainly about the practicability of using non-descendants of sensitive attributes for fair prediction. The concern is based on the following two points, which are not true according to our understanding. \n\nFirst, reviewer UUSP believes that sensitive attributes should causally influence every variable of interest, such that it is impossible to find non-descendants. However, reviewer UUSP did not provide strong scientific support for this claim. The provided wiki page is not convincing, and the page is only about “race”. Moreover, in our real data experiments, we have clearly shown that only 4 out of 30 features are descendants of the sensitive attribute “sex”.\n\nSecond, reviewer UUSP misunderstands the seminal counterfactual fairness paper [Kusner et al., 2017]. Reviewer UUSP does not believe the [Kusner et al., 2017] paper intended to establish its Lemma 1 as a standard or default approach and thinks it encourages to use descendants. However, the fact is that Lemma 1 in [Kusner et al., 2017] is indeed the standard of their methods. All three levels of assumptions in [Kusner et al., 2017] are based on its Lemma 1. Aside from the level 1 which our work builds on, two other levels of assumptions cannot depend on descendants as well, which is discussed in section 4.2 and illustrated explicitly in Figure 2. The latent variable in level 2 and the error term in level 3 used for prediction are both non-descendants of the sensitive attributes.\n\nBesides, it is obvious that the authors of [Kusner et al., 2017] do not encourage to use descendants in prediction, as the explanation of Lemma 1 in section 3.2,\n> This does not exclude using a descendant W of A as a possible input to $\\hat{Y}$. **However, this will only be possible in the case where the overall dependence of $\\hat{Y}$ on A disappears, which will not happen in general.** Hence, Lemma 1 provides the most straightforward way to achieve counterfactual fairness. \n\nNote that **\"which will not happen in general\"** is contradictory to reviewer UUPS’s viewpoint, as reviewer UUSP thinks the disappearance of the overall dependence of $\\hat{Y}$ on A (cancel out) is common. While this might be a point worth more debating, it is not the focus of our paper. We do not think the possible debate should hurt our contribution in any way.\n\nWe sincerely hope you could discuss on this point during the AC-Reviewer discussion phase.\n\nMany thanks,\n\nAuthors of paper 1219", " Thank you for your quick reply.\n\n1. First of all, without any offense, we would like to say that the Wikipedia link is not convincing, not to mention that it is only about race and lists only limited number of variables. Furthermore, how would you interpret the real data experiment in our paper?\n\n2.\n\n> “I do not believe the [Kusner et al., 2017] paper intended to establish its Lemma 1 as a standard or default approach.” “They go on to give 2 other levels of assumptions which enable the construction of predictors that can depend on descendants (provided the dependence cancels out).”\n\nWe respectively disagree with this point. The fact is that Lemma 1 in [Kusner et al., 2017] is indeed the standard of their methods. All three levels of assumptions in [Kusner et al., 2017] are based on its Lemma 1. Two other levels of assumptions cannot depend on descendants as well, which is discussed in section 4.2 and illustrated explicitly in Figure 2. 
The latent variable in level 2 and the error term in level 3 are both non-descendants of the sensitive attributes. \n\nBesides, it is obvious that the authors do not encourage to use descendants in prediction, as the explanation of Lemma 1 in section 3.2, \n> This does not exclude using a descendant W of A as a possible input to $\\hat{Y}$. However, **this will only be possible** in the case where the overall dependence of $\\hat{Y}$ on A disappears, **which will not happen in general.** Hence, Lemma 1 provides the most straightforward way to achieve counterfactual fairness.\n\n3.\n\n> Level 1. Build $\\hat{Y}$ using only the observable non-descendants of A. This only requires partial causal ordering and no further causal assumptions, but **in many problems** there will be few, if any, observables which are not descendants of protected demographic factors.\n\nFirst, note that it is **\"in many problems\"** here, not \"most\". There is still many problems that can have quite a few non-descendants, and our real data is an example. Second, we would like to emphasize that the author's intention in section 4.2 is to use latent variable when there are few observables, but never mention or encourage to use descendants in prediction.", " Thanks for strengthening our paper, and we appreciate your efforts!\n\nAuthors of 1219", " First I will reply regarding the rating, then move to the other important points. To quote from the reviewer guidelines:\n\n> 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.\n\n> 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.\n\nNote that the description for 3 begins with \"For instance,\" hence it does not mean the listed conditions are exhaustive. Also note that the borderline reject/accept descriptions ask us to \"Please use sparingly.\" I would consider increasing my rating to one of these borderline ratings, despite my second point in the previous comment (about incremental technical progress), if the authors had addressed my other questions. But I am not satisfied with the responses to my other questions, as should be clear from this continued discussion, which I turn to next.\n\nAbout sensitive attributes being causally related to everything:\n\n> \"sensitive attributes are likely causally influential on every measured variable of interest\" is purely subjective without any scientific evidence\n\nOn the contrary, this is one of the most persistent findings in social and health sciences. Providing one reference for it would almost be absurd because it requires something more like citing several entire disciplines. Taking race as an example, there are many references here https://en.wikipedia.org/wiki/Racial_inequality_in_the_United_States regarding the effects of racism on housing, education, health care, employment, wealth, and policing, and each of these things in turn (housing, education, health care, employment, etc) is also causally important for almost every other outcome we would study (and to each other).\n\nAbout the focus on using non-descendants:\n\nI do not believe the [Kusner et al., 2017] paper intended to establish its Lemma 1 as a standard or default approach. For one thing, they named this result a lemma, and for another see their section 4.2:\n\n> Level 1. Build $\\hat Y$ using only the observable non-descendants of A. 
This only requires partial causal ordering and no further causal assumptions, **but in many problems there will be few, *if any*, observables which are not descendants of protected demographic factors.**\n\nI added the emphasis in this quote. They go on to give 2 other levels of assumptions which enable the construction of predictors that can depend on descendants (provided the dependence cancels out).\n\nConcluding:\n\nI see this work in its current stage as a classic \"solution in search of a problem,\" and the application to counterfactual fairness (and specifically the version using an unrealistic sufficient condition) is not a good enough problem for me to believe the contribution is significant enough for this conference. As I mentioned earlier, if this issue had been addressed I would consider raising my rating.", " Thank you for your feedback.\n\nFirst, without any offense, we would like to point out that your statement \"sensitive attributes are likely causally influential on every measured variable of interest\" is purely subjective without any scientific evidence. If you insist your point is correct, we would appreciate it if you could provide us a real example where sensitive attributes causally influence every variable. Here we can provide you a counterexample. In the Student Performance Data Set used in our paper, only 4 out of 30 features are descendants of the sensitive attribute ‘sex’ according to the causal discovery algorithms. While the causal discovery algorithms could make mistakes, the results are surely stronger than your subjective judgement. \n\nSecond, even if your subjective judgement is correct, the limitation of finding non-descendants originates from the seminal counterfactual fairness paper [Kusner et al., 2017]. I do not think our work should be blamed for not fixing this issue. Our work makes counterfactual fairness more practical by dropping the requirement of known causal graphs. This is a significant advancement, which has been extensively discussed in our paper and has been appreciated by the other two reviewers. This is also how research has progressed, with each work extending existing works toward certain direction, and ultimately the technology can be applied to a wider range of fields.\n\nThird, thanks for discovering that our technical contribution of identifiability of causal relations in MPDAG has potentially wider applications. However, again, we do not think we should be blamed for only applying it to counterfactual fairness. The identifiability of causal relations in MPDAG is strongly motivated by achieving more practical counterfactual fairness. In fact, many mathematical discoveries and techniques are designed for specific problems and the abstracted idea is then used for other problems. For example, the Monte Carlo methods were originally developed for particle physics and nowadays people in statistics and machine learning use this technique extensively.\n\nLast but not least, even if you subjectively dislike our work, we cannot understand why you give us a rating 3, which should be used for \"a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations\". We understand and appreciate your effort to keep NeurIPS high standard, but we would appreciate more if you can base your assessment on solid evidence.\n\nMany thanks,\n\nAuthors of paper 1219", " I appreciate the authors' careful replies and I am glad they found some of my comments useful. 
However, I am sorry to say this exchange did not influence me to increase my initial rating, and I still have significant concerns with the paper.\n\nFirst, my previous points about the weak connection with fairness are unaddressed. The paper would be significantly stronger if it were not framed as specific to counterfactual fairness. There are two reasons for this, both of which I mentioned before:\n\n(1) The focus on using non-descendants is based entirely on a sufficient condition which is not necessary, and--I will emphasize--is extremely unlikely to hold in any real world applications where fairness is important. This is because sensitive attributes are likely causally influential on every measured variable of interest. Histories of economic, social, and cultural disadvantages/oppression mean we are unlikely to find any predictors of some important outcome which have not also been influenced by that unfair history. I think this is a realistic version and it's what I will stick with, but to make the point more clear in a more critical version: the contribution of this paper to fairness is an algorithm for finding variables satisfying some condition which essentially no variables will satisfy in real applications.\n\n(2) The ancestral closure assumption used in counterfactual fairness is a structural assumption that could reasonably extend to other applications. Hence, the contribution of the current paper could be applicable to a much wider scope. The methodological results here should be presented in the appropriate generality and abstraction, and doing so would help address the issue (1) above because then its applicability would not rely entirely on one setting where its usefulness may be too unrealistic.\n\nSecond, even if we suppose the previous issue has been addressed, I retain my initial assessment that the paper provides relatively incremental technical progress on previous work. I do think the method could be useful and have some valuable applications, but I do not think it meets the standards of originality and impact expected of papers in such a highly selective conference as NeurIPS. (I hope the authors will not find this assessment discouraging because it is important to do valuable work like this whether or not it gets accepted at any one particular venue)", " Dear Reviewer UUSP,\n\nThanks for reviewing our paper. This is the last day for the Author-Reviewer Discussion period. We understand that your are very busy and sorry to bother you again. We wonder if you need any further clarification. We would greatly appreciate it if you'd like to have any discussion with us. Thanks for your time!\n\nMany thanks,\n\nAuthors of 1219", " Thanks for answering my questions. I appreciate that authors have answered all questions. I am raising my rating to Accept.\n\n", " We feel fortunate that you are interested in the technical details of our work. Thanks again for strengthening our paper, and we greatly appreciate your efforts!\n\nAuthors of 1219", " Dear Area chair,\n\nThanks for handling our submission. Only one day is left in the Author-Reviewer Discussion period, yet we did not receive any response to our rebuttal from Reviewer v6YC or Reviewer UUSP. We understand that reviewers are very busy. We wonder if they need further clarification. We would greatly appreciate it if you could follow up on this issue.\n\nMany thanks,\n\nAuthors of 1219", " Dear Reviewer UUSP,\n\nWe appreciate your comments and time! 
Half of the author-reviewer discussion time has passed, and we wonder whether you need further clarification. We look forward to hearing from you, and we are happy to discuss anything with you about our work!\n\nBest regards,\n\nAuthors of 1219", " Thank you for your reply. All my concerns have been addressed and the revised Lemma B.2 is correct to me now.", " Dear Reviewer Hhvs,\n\nWe appreciate your comments and time! We have provided answers to your questions and revised the paper following your suggestions. Would you mind checking it and confirming if you have further questions?\n\nBest Regards,\n\nAuthors of 1219", " Dear Reviewer UUSP,\n\nWe appreciate your comments and time! We have provided answers to your questions and revised the paper following your suggestions. Would you mind checking it and confirming if you have further questions?\n\nBest Regards,\n\nAuthors of 1219", " Dear Reviewer v6YC,\n\nWe appreciate your comments and time! We have provided answers to your questions and revised the paper following your suggestions. Would you mind checking it and confirming if you have further questions?\n\nBest Regards,\n\nAuthors of 1219", " Thank you for your constructive comments. We address your comments point by point as follows. We have also tried to revise the manuscript according to your suggestions.\n\n> W1 The critical assumption is that the sensitive attribute does not have any ancestors. Another crucial assumption for the analysis is the absence of confounders. Please discuss the realistic implications of these assumptions and the challenges because of these.\n\nThanks for this insightful question.\n\n- Response to Assumption on \"sensitive attribute does not have any ancestors\". First, we would like to mention that we have provided a general solution without this assumption (section 5.1 General case). In section 5.2, we provided the solution under this assumption, and the justifications can be found in L245-L249 in the original manuscript. To sum up, there are many situations where the protected attribute like gender and age cannot be caused by other features. In this case, we can obtain the same results as if we know the full DAG, which is very interesting and useful in situations where the assumption holds.\n\n- Response to Assumption of no confounders. We have stated the reason (Line 357-359 in the original manuscript) in the Conclusion and discussion part that ``because the causal discovery algorithms themselves will not work well in such challenging scenarios.'' To sum up, causal discovery is a challenging ill-posed problem, and some assumptions, e.g., no confounders, are commonly adopted for the initial development of a practical approach, e.g., PC and GES. It remains an active research area to relax this assumption which has attracted much attention.\n\n- In addition, we have followed the prior work on establishing counterfactual fairness [66, 4, 7 , 57 ] which also assumes no selection bias and confounders. Though our method cannot handle confounders, we are the first to consider the problem of unknown or partially known DAGs (L64-67 in the original manuscript), which makes a steady step toward more practical counterfactual fairness.\n\n- Finally, we have noticed a recent work \"Selection, Ignorability and Challenges with Causal Fairness\" by Fawkes et al.[1]. According to their discoveries, with the two critical assumptions on a causal DAG, a counterfactual fairness measure degenerates to demographic parity, which is discussed extensively by the authors [1]. 
In this situation, it is considerably more difficult to provide a clear causal interpretation without knowing the causal DAG. This may give rise to an exciting topic to research in future work. Given the reviewer's concern, we have reflected the current debate surrounding these conditions in the 'Conclusion and discussions' part of the revision.\n\nHope this addresses your concern and please kindly let us know if you have further concerns.\n\n[1] Fawkes, Jake, Robin Evans, and Dino Sejdinovic. \"Selection, Ignorability and Challenges With Causal Fairness.\" First Conference on Causal Learning and Reasoning. 2021.\n\n> W2 It is unclear how noise in MPDAG would affect the quality of this approach. Please discuss this in light of cases where it is learned from data.\n\nThanks for raising this concern. At this point, we do not know, theoretically, how noise in MPDAG would affect the performance of our FairRelax when the graph is learned from data. However, we have endeavoured to explore this experimentally in Appendix F.8 in the original manuscript, but we feel sorry for missing the reference in the main paper. After comparing Table 1 and Table 5 in Appendix F.8, we concluded that \"there is not much difference on fairness and prediction performance on FairRelax model in two cases\". Please refer to Appendix F.8 for the detailed numeric results. Besides, we have also stated in the second footnote in page 7 that \"Given a sufficiently large sample size, current causal discovery algorithms can recover the CPDAG with high accuracy on the simulated data [19].\" Again, we are sorry for missing the reference to Appendix F.8 and have added it in the revision.\n\n> W3 Experiments on real data do not show how their approach is able to ensure fairness of the trained classifier.\n\nOur approach is the \"FairRelax\" model. In Line 341 in the original submission, we have stated that \"The results are reported in Table 2. Since under the root node assumption, there is no possible descendants of the sensitive attribute, the model \"Fair\" and \"FairRelax\" give the same RMSE result and both of them achieve counterfactual fairness at the cost of slight accuracy decrease.\"", " > W4 Related Work: There is no discussion of these techniques as compared to this recent paper which does not assume knowledge of causal graph and performs feature selection. [1] Causal Feature Selection for Algorithmic Fairness. S Galhotra, K Shanmugam, P Sattigeri, KR Varshney. SIGMOD 2022.\n\nThanks for reminding us about this recent work. We found it was published in June 2022, after the NeurIPS submission date. By performing conditional independence tests between different feature subsets in the context of data integration, Galhotra et al. [1] studied the problem of fair feature selection without assuming access to the underlying graph. However, they did so from the perspective of interventional fairness at the subpopulation level as opposed to the individual-level counterfactual fairness in our work. We have added a discussion on this recent paper in the revision according to your kind suggestion.\n\n> W5 Algorithm complexity seems quite high. Please discuss running time on real datasets.\n\nThanks for raising this concern. In the general case, the complexity in the worst case is $\\mathcal{O}(|sib(S,\\mathcal{G})+ch(S,\\mathcal{G})|\\*|E(\\mathcal{G})|\\*|V(\\mathcal{G})|)$ (L237 in the first manuscript), which scales linearly w.r.t edges and nodes. 
\nUnder the root node assumption, the computational complexity further reduces to $\\mathcal{O}(|V|+|E|)$ (L266 in the first manuscript), where $|V|$ is the number of nodes and $|E|$ is the number of edges in $\\mathcal{G}$. On the real dataset with 30 nodes and 17 edges, our algorithm running time is 0.6499 ms in a Macbook Pro with 2.7 GHz Dual-Core Intel Core i5.\n\n> Questions: Please clarify the connection to related work and how sensitive this approach is with respect to noise in MPDAG.\n\nThanks for your kind comments. We have clarified the connection to the related work in the Introduction part, specifically on the causal fairness and causal discovery methods on Line 24-48 in the original manuscript. Besides, we have provided a summary of existing results on ancestral relations identifiability in Appendix G and referenced in Line 63 in the main of the original manuscript. As for the sensitivity of this approach with respect to noise in MPDAG, see our response to W2. Please let us know if you have any further concerns, and we are encouraged to have a discussion.", " Thank you for your helpful comments. We address your comments point by point as follows. We have also tried our best to revise the manuscript according to your suggestions.\n\n> Weakness: Many of the results and basically all of the illustrative figures are in appendices. I think the paper could be improved by reorganization that includes showing examples in the main text, like figures 2, 3, and 6 which are currently in the appendices. As it is now, a reader who is not already familiar with PDAG, CPDAG, MPDAG will not know how to visualize these unless they read the supplementary material, and will struggle to understand the definitions and lemmas.\n\nThanks for your nice suggestion. Given that NeurIPS is a technical conference with a page limit for the submissions, we have to assume that the readers already have some background in causality so that we can focus on presenting our new results. In case some readers do not have a technical background, we have tried to include more preliminaries in Appendix A. We will move some of the suggested figures to the main paper in the camera ready version as one additional page will be allowed. The reason we put Figure 3 in the appendix is that it is used in the proof procedure of Lemma B.1.\n\n> Weakness: It is possible the technical contribution of the current paper is a relatively small increment, extending Lemma 4.4 from previous work on CPDAGs. Since much of the details is in the appendix I did not review it all carefully enough to compare with the previous papers. This paper could be strengthened by clarifying which parts are new contributions. Hope this addresses your concern and please kindly let us know if you have further concerns.\n\nAgain, sorry for not being able to put the proof details in the main text. Below we provide a summary of technical novelties compared to the work on CPDAG. CPDAG is a special case of MPDAG (Line 110 and 676 in the original manuscript), and MPDAG can represent more information regarding the causal relationships than CPDAGs. However, this additional information has not been fully exploited in practice due to the fact that causal methods that are applicable to CPDAGs are not directly applicable to general MPDAGs. For example, the CPDAGs can have partially directed cycles while MPDAGs cannot. Lemma 4.4 is therefore not a straightforward extension from CPDAG. 
To establish Lemma 4.4, we introduced two new technical lemmas (Lemma B.1 and Lemma B.2) exploring the properties of the general MPDAGs. \n\nAs the novelty lies in the proof techniques, we have attempted to explain the key difference and difficulty in the proof of the desired Theorem 4.5 and the one for CPDAGs on Line 676-680 of the appendix in the original manuscript. To better explain our technical contributions in identifying ancestral relations in MPDAGs in the main, we have added a discussion on difficulties and differences in working with MPDAGs, as opposed to CPDAGs in the revision. Hope this addresses your concern and please kindly let us know if you have further concerns.\n\n > Q1: Counterfactual fairness is possible even with definite descendants provided counterfactual changes “cancel out” in the prediction. The current paper gives an incorrect definition on lines 51-53 which is based on a sufficient (but not necessary) condition for counterfactual fairness.\n\nNice thinking! Yes, it is possible with definite descendants provided counterfactual changes “cancel out” in the prediction. Our statement of counterfactual fairness on Line 51-53 exactly follows the pioneering counterfactual fairness work [Kusner et al., 2017, Lemma 1], which is indeed based on a sufficient (but not necessary) condition for counterfactual fairness. However, we would like to argue that the sufficient condition is reasonable, because the \"cancel out\" situation can only happen for a special set of model parameters. Lemma 1 in [Kusner et al., 2017] is more strict in the sense that fairness is a property of the graph and works for all possible parameters. Extension of this implication of the counterfactual fairness notion is definitely an interesting direction, which is beyond the scope of our paper. ", " > Q2: Should the unfairness of Oracle and Fair methods in the simulations be exactly zero, even in samples? I would think it will be close to zero in each realization and very close to zero on average, with small standard errors- but not necessarily exactly zero.\n\nYes, the unfairness of \"Oracle\" and \"Fair\" methods in the simulations is exactly zero for each realisation. This is because the \"Oracle\" and \"Fair\" models make predictions with non-descendants given the ground-truth DAG and definite non-descendants of the sensitive attribute in an MPDAG, respectively. These attributes have the exact same value in the counterfactual data as they do in the observational data, as they are unaffected by the sensitive attribute. Therefore, both the \"Oracle\" and \"Fair\" models will yield the same prediction for each individual over the observational and counterfactual data. We have provided the source code in the supplementary, and we would appreciate it if you could try out the code to verify the correctness of the simulations.\n\n> Q3: Most of the methods proposed in this paper do not seem to be necessarily connected to fairness. Presumably, it is of more general interest to determine the ancestral relationships in an MPDAG. Would it make more sense to reframe the paper as a general method within the SCM literature, and its application to fairness just one of the motivating examples? This may also strengthen the paper since, as I point out in Q1, the connection of the current methods to fairness is based on a sufficient condition which may not be necessary, and hence it may not be a strong enough connection for fairness to be a main focus of the paper.\n\nThanks for your kind suggestion. 
\n\n- We agree that the identifiability of ancestral relations in an MPADAG could have other potential applications. We had the same idea as you when we finished the first draft, but we finally gave up this idea because we could not think of other applications with real impacts. \n\n- Here we would like to briefly summarise the motivation of our work (Line 37-63 in the first manuscript) to clarify our paper organisation further. The seminal work [Kusner et al., 2017] our work built on assumes that the DAG is known and thus one can easily determine the ancestral relations in a DAG to achieve counterfactual fairness. However, when the DAG is unknown, the causal discovery algorithms with background knowledge can only give us MPDAGs in most situations. To achieve counterfactual fairness in this situation, one naturally needs algorithms to determine the ancestral relations in MPDAGs. This is why we put much effort into determining the ancestral relations in MPDAGs. Moreover, as we have illustrated in section 5.2, the background knowledge of sensitive attributes as root nodes is very specific to the counterfactual fairness problem.\n\n- Taking into account your suggestions, we have added a short discussion in the conclusion part to remind the readers that our ancestral relation determining procedure in MPDAGs may have more applications. As for your concern on the sufficient condition of the counterfactual fairness, see our response to Q1. Hope this addresses your concern and please kindly let us know if you have further concerns.\n\n> Minor issue: In section 5.1, isn’t the inclusion of “(possibly small)” a bit of wishful thinking? The violation of fairness could also possibly be large. Without knowledge of a specific context to justify either possibility perhaps this speculation should be omitted.\n\nWe have removed \"(possibly small)\" to make the statement more rigorous. As you have nicely noticed in Q1, if the parameters of descendants happen to cancel out, the violation would be small. Moreover, as we did not use the definite descendants, the chance of violation would be even smaller. This is also related to the accuracy-fairness trade-off as we have referenced in the Experiment section (Line 321) and discussed in Appendix H in the original manuscript.\n\n> Minor issue: citing preprints instead of published versions, e.g. of [21, 23, 56]\n\nThanks for pointing this out. We have fixed this issue in the revision.", " Thank you for your constructive comments. We address your comments point by point as follows. We have also revised the manuscript according to your suggestions.\n\n> The introduction of b-critical set (Definition 4.2) is not necessary. Since a chordless b-possibly causal path from $S$ to $T$ definitely has no chord, it degenerates to a partially directed path. Therefore, there is no essential difference between the definition of b-critical set and the definition of critical set defined by Fang and He (2020), except that the latter is defined with respect to CPDAGs.\n\nThank you for reviewing so carefully and pointing this out. We agree that a chordless b-possibly causal path from $S$ to $T$ degenerates to a chordless possibly causal path and there is no essential difference between the definition of the b-critical set and the definition of the critical set defined by Fang and He (2020). Hence, we have removed the introduction of the b-critical set and replaced all 'b-critical set' by 'critical set' in the revision.\n\n> It seems to me that Lemma B.2 is flawed. 
Lemma B.2 states that if $\\mathbf{R} \\subseteq{sib(X,\\mathcal{G})}$ has a subset that induces a complete subgraph, then there is a DAG $\\mathcal{D} \\in [\\mathcal{G}]$ such that $pa(X,\\mathcal{D})= pa(X,\\mathcal{G}) \\cup \\mathbf{R}$. If Lemma B.2 is correct, then for any $\\mathbf{R} \\subseteq{sib(X,\\mathcal{G})}$ that induces a complete subgraph, there is a DAG $\\mathcal{D} \\in [\\mathcal{G}]$ such that $pa(X,\\mathcal{D})= pa(X,\\mathcal{G}) \\cup \\mathbf{R}$. However, this is impossible. For example, consider a complete graph with 4 variables: $X$, $S\\_1$, $S\\_2$, $C$. Every edge except $C \\rightarrow S\\_1$ is undirected. Let $R=\\lbrace S_1,S_2 \\rbrace$. $\\mathbf{R}$ induces a complete subgraph, but orienting $\\mathbf{R} \\rightarrow X$ and $X \\rightarrow C$ will lead to a directed cycle. \n\nThanks for reading the supplementary material and being so careful. Yes, you are right. The previous statement for Lemma B.2 was easy to misunderstand, so we have rephrased Lemma B.2 in the revision. Lemma B.2 intended to say that if there is a set $\\mathbf{H} \\subseteq{sib(X,\\mathcal{G})}$ inducing a complete subgraph, then there exists a superset $\\mathbf{R}$ of $\\mathbf{H}$ that $\\mathbf{H} \\subseteq{\\mathbf{R}} \\subseteq {sib(X,\\mathcal{G})}$ such that there is a DAG $\\mathcal{D} \\in [\\mathcal{G}]$ such that $pa(X,\\mathcal{D})= pa(X,\\mathcal{G}) \\cup \\mathbf{R}$. Therefore, in your example, let $\\mathbf{H}= \\lbrace S_1,S_2 \\rbrace$ and $\\mathbf{R}=\\lbrace S_1,S_2,C \\rbrace$. Orienting $\\mathbf{H} \\rightarrow X$ and $X \\rightarrow C$ will lead to a directed cycle, but there still exists a superset $\\mathbf{R}$ of $\\mathbf{H}$ that $\\mathbf{H} \\subseteq \\mathbf{R} \\subseteq {sib(X,\\mathcal{G})}$ such that orienting $\\mathbf{R} \\rightarrow X$ and $X \\rightarrow \\emptyset$ is reasonable, since it will not lead to a directed cycle or a collider, which also means there is a DAG $\\mathcal{D} \\in [\\mathcal{G}]$ such that $pa(X,\\mathcal{D})= pa(X,\\mathcal{G}) \\cup \\mathbf{R}$ according to Theorem 1 in Fang and He (2020).\n\nThe rephrased Lemma B.2 is as follows:\n\n- In an MPDAG $\\mathcal{G}$, for any vertex $X$, there exists $\\mathbf{H} \\subseteq{sib(X,\\mathcal{G})}$ that induces a complete subgraph of $\\mathcal{G}$ if and only if there exists some $\\mathbf{R}$ that $\\mathbf{H} \\subseteq \\mathbf{R} \\subseteq{sib(X,\\mathcal{G})}$, such that there is a DAG $\\mathcal{D} \\in [\\mathcal{G}]$ that $pa(X,\\mathcal{D})=\\mathbf{R} \\cup pa(X,\\mathcal{G})$ and $ch(X,\\mathcal{D})=sib(X,\\mathcal{G}) \\cup ch(X, \\mathcal{G}) \\backslash \\mathbf{R}$.\n\nHope this addresses your concern and please kindly let us know if you have further concern.\n> A minor point. Some cited papers have been published. Please revise the bib to make sure that their latest versions are cited.\n\nThanks for pointing this out. We have fixed this issue in the revision.", " We appreciate all reviewers' work and friendly comments. We are encouraged that they found our paper to be well-motivated (Reviewer Hhvs), important (Reviewer v6YC), useful (Reviewer UUSP), relevant, new, and interesting (Reviewer Hhvs). Moreover, we are grateful that reviewers recognised our contributions both on technical and application parts (Reviewer Hhvs). Reviewers also found that our simulation uses fairly high dimensional DAGs compared to the common examples considered in counterfactual fairness (Reviewer UUSP). 
We also appreciated that reviewers found our paper is easy to follow (Reviewer v6YC), well-written and well-organized (Reviewer Hhvs).\n\nThe modification in the manuscript is summarised as follows:\n- We tackle questions on the technical part raised by Reviewer Hhvs.\n- We try our best to clarify the technical contributions in a high level in the main paper.\n- We modify some introduction and discussion to respond to reviewers' comments and move Algorithm 1 in the original manuscript from the main text to the appendix. Some minor issues are revised accordingly.\n\nWe also appreciate reviewers pointing out our weaknesses. We address their comments point by point and try our best to update the manuscript accordingly. The modification part is coloured in blue. Hope our response addresses the reviewers' concerns.", " This paper proposes a method for identifying ancestral relations on MPDAGs, which can be learned from observational data and domain knowledge. This is then exploited for demonstrating good performance in achieving counterfactual fairness, as knowing the causal graph is important for the application of counterfactual fairness methods. This proposed approach is then evaluated on both the synthetic data, as well as the UCI student performance data set. The authors don't discuss how representative this data is, of the types of real-world problems where their method could be applied, nor whether the particular choice of benchmark perpetuates biases in the utilisation of Western data benchmarks in the development of fairness methods. That being said, given the scope of the paper, and the particularities of the method being proposed, it feels like these are meant to be illustrative examples of the method 'working as intended' rather than a deep empirical evaluation of its use across a number of domains. With that in mind, these are minor issues in this particular context.\n\nAs for the limitations, as the authors themselves state: \"Throughout this paper, we assume no selection bias and presence of confounders because the causal discovery algorithms themselves will not work well in such challenging scenarios\". This is reasonable in-context, as the proposed method has a fairly narrow focus and the authors wanted to highlight its benefits within that use case. However, this is -not- a reasonable assumption overall, for applications on real-world data. The authors do not overclaim and are communicating this limitation to the reader.\n\nYet, the paper would still benefit from the authors highlighting more just how prevalent these issues are, putting the applicability of the proposed approach in context; as well as discussing at a greater length what they see as the likely / necessary next steps to make such methods safely applicable in practice, in presence of confounders and selection bias.\n\nGiven that the authors focus on counterfactual fairness, they should also openly engage with the criticisms presented in \"The Use and Misuse of Counterfactuals in Ethical Machine Learning\" and flag those risks in the paper. The paper itself is on fairness, though I didn't spot a particular emphasis on the wider ethics issues stemming from the assumptions made in the authors' research or the ethical issues regarding the use of counterfactual fairness, or this proposed method, in practice - on real data with more realistic assumptions on both the categories involved and the causal graph. 
Given the narrow focus of the work, the ethical issues are ultimately minor, when considered in context.\n\nNevertheless, the paper would still benefit from a deeper and more honest account of how restrictive the presumed limitations of the proposed method are in practice, and to what extent these obstacles are practically possible to overcome. The paper would also benefit from a deeper engagement with well-known critiques of how counterfactual fairness methods tend to be pitched and used in practice, as given for example in \"The Use and Misuse of Counterfactuals in Ethical Machine Learning\". I would encourage the authors to extend the discussion of the limitations, as many readers would not have the required level of familiarity, and may erroneously believe that they can use the method in their own applied scenario, even in case when it may not be safely and ethically applicable. To mitigate such risks, some additional details should be provided.", " No ethical concerns. NA. None.", " The paper proposes a mechanism to perform fairness aware feature selection that achieve counterfactual fairness under certain assumptions. \n\n\n \nS1 Fairness aware feature selection is an important problem which is underexplored.\nS2 Problem setting that complete causal graph may not be available is important.\nS3 The paper is easy to follow.\n\n\nWeaknesses\nW1 The critical assumption is that the sensitive attribute does not have any ancestors. Another crucial assumption for the analysis is the absence of confounders. Please discuss the realistic implications of these assumptions and the challenges because of these.\n\nW2 It is unclear how noise in MPDAG would affect the quality of this approach. Please discuss this in light of cases where it is learned from data.\n\nW3 Experiments on real data do not show how their approach is able to ensure fairness of the trained classifier.\n\nW4 Related Work: There is no discussion of these techniques as compared to this recent paper which does not assume knowledge of causal graph and performs feature selection.\n[1] Causal Feature Selection for Algorithmic Fairness. S Galhotra, K Shanmugam, P Sattigeri, KR Varshney. SIGMOD 2022.\n\nW5 Algorithm complexity seems quite high. Please discuss running time on real datasets. Please clarify the connection to related work and how sensitive this approach is with respect to noise in MPDAG. The authors have clearly discussed their assumptions.", " This paper works within a structural causal modeling (SCM) framework for defining fairness of algorithms. Previous literature defining fairness this way assumes knowledge of the relevant SCM, which is a key limitation. The current paper relaxes this assumption to some specific contexts where the SCM is partially known, that is when a maximally partially directed acyclic graph (MPDAG) is available rather than a full directed acyclic graph (DAG). In the MPDAG framework a variable can be a definite descendent, definite non-descendent, or a possible descendent of other variables. This paper provides algorithms for determining which of these relationships holds for a given pair of variables and theorems to establish the correctness of the algorithms. 
The application to fairness involves considering which variables are definite or possible descendants of a sensitive attribute, and proposing corresponding definitions of fairness based on whether only definite non-descendants are used in the prediction (Fair) or possible descendants are also included (FairRelax) and only definite descendants are excluded. \n Strength: SCM approaches rely on very strong assumptions so it is useful to relax these.\n\nStrength: The simulation results use fairly high dimensional DAGs compared to the common examples considered in counterfactual fairness.\n\nWeakness: Many of the results and basically all of the illustrative figures are in appendices. I think the paper could be improved by reorganization that includes showing examples in the main text, like figures 2, 3, and 6 which are currently in the appendices. As it is now, a reader who is not already familiar with PDAG, CPDAG, MPDAG will not know how to visualize these unless they read the supplementary material, and will struggle to understand the definitions and lemmas. \n\nWeakness: It is possible the technical contribution of the current paper is a relatively small increment, extending Lemma 4.4 from previous work on CPDAGs. Since much of the details is in the appendix I did not review it all carefully enough to compare with the previous papers. This paper could be strengthened by clarifying which parts are new contributions. Q1: Counterfactual fairness is possible even with definite descendants provided counterfactual changes “cancel out” in the prediction. The current paper gives an incorrect definition on lines 51-53 which is based on a sufficient (but not necessary) condition for counterfactual fairness. \n\nQ2: Should the unfairness of Oracle and Fair methods in the simulations be exactly zero, even in samples? I would think it will be close to zero in each realization and very close to zero on average, with small standard errors- but not necessarily exactly zero.\n\nQ3: Most of the methods proposed in this paper do not seem to be necessarily connected to fairness. Presumably it is of more general interest to determine the ancestral relationships in an MPDAG. Would it make more sense to reframe the paper as a general method within the SCM literature, and its application to fairness just one of the motivating examples? This may also strengthen the paper since, as I point out in Q1, the connection of the current methods to fairness is based on a sufficient condition which may not be necessary, and hence it may not be a strong enough connection for fairness to be a main focus of the paper.\n\nMinor issue: In section 5.1, isn’t the inclusion of “(possibly small)” a bit of wishful thinking? The violation of fairness could also possibly be large. Without knowledge of a specific context to justify either possibility perhaps this speculation should be omitted.\n\nMinor issue: citing preprints instead of published versions, e.g. of [21, 23, 56]\n The weaknesses and questions above, as well as the limitations inherited by any SCM type of approach which include untestable assumptions. The current paper also acknowledges its assumptions of no confounding or selection bias, but it is reasonable to try to address that in future work rather than all at once.", " Given an MPDAG, the authors prove a method that can identify whether a variable is a definite descendant, definite non-descendant, or possible descendant of another variable. 
Building upon this result, the authors further study the problem of learning counterfactually fair models via selecting features. Overall, I think this paper is relevant, new, and interesting. The paper is well-written and well-organized. The motivation is clearly described and the contribution is also clear. However, there is a technical lemma that seems flawed. Please refer to the detailed questions below. 1. The introduction of b-critical set (Definition 4.2) is not necessary. Since a chordless b-possibly causal path from S to T definitely has no chord, it degenerates to a partially directed path. Therefore, there is no essential difference between the definition of b-critical set and the definition of critical set defined by Fang and He (2020), except that the latter is defined with respect to CPDAGs.\n\n2. It seems to me that Lemma B.2 is flawed. Lemma B.2 states that if $R\\subseteq sib(X, G)$ has a subset that induces a complete subgraph, then there is a DAG $D$ in $[G]$ such that $pa(X, D)=pa(X, G)\\cup R$. If Lemma B.2 is correct, then for any $R\\subseteq sib(X, G)$ that induces a complete subgraph, there is a DAG $D$ in $[G]$ such that $pa(X, D)=pa(X, G)\\cup R$. However, this is impossible. For example, consider a complete graph with 4 variables: $X, S_1, S_2, C$. Every edge except $C\\to S_1$ is undirected. Let $R=\\\\{S_1, S_2\\\\}$. $R$ induces a complete subgraph, but orienting $R\\to X$ and $X \\to C$ will lead to a directed cycle.\n\n3. A minor point. Some cited papers have been published. Please revise the bib to make sure that their latest versions are cited. \n The limitations of the paper have been addressed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "X04VQkrm_fk", "9HLwOOjuj_7", "7moC7oAvCtP", "7moC7oAvCtP", "zuKOiCzhUcK", "4wbAzBo3XVm", "nips_2022_9aLbntHz1Uq", "E955xneNcfi", "9iETKhzjKM8", "Pa6m_my4XGP", "7y40m6iw65k", "dG6JZ05zui", "xDaqcLe0za", "ZFyVul-ZnWt", "pfnAf1HSATk", "nips_2022_9aLbntHz1Uq", "xDaqcLe0za", "1yUao1HJSRW", "bCP__rlpsp", "xDaqcLe0za", "aj5YJA8cgkn", "aj5YJA8cgkn", "aj5YJA8cgkn", "xDaqcLe0za", "xDaqcLe0za", "bCP__rlpsp", "nips_2022_9aLbntHz1Uq", "nips_2022_9aLbntHz1Uq", "nips_2022_9aLbntHz1Uq", "nips_2022_9aLbntHz1Uq", "nips_2022_9aLbntHz1Uq", "nips_2022_9aLbntHz1Uq" ]
nips_2022_LMuh9bS4tqF
Learning Distinct and Representative Modes for Image Captioning
Over the years, state-of-the-art (SoTA) image captioning methods have achieved promising results on some evaluation metrics (e.g., CIDEr). However, recent findings show that the captions generated by these methods tend to be biased toward the "average" caption that only captures the most general mode (a.k.a, language pattern) in the training corpus, i.e., the so-called mode collapse problem. Affected by it, the generated captions are limited in diversity and usually less informative than natural image descriptions made by humans. In this paper, we seek to avoid this problem by proposing a Discrete Mode Learning (DML) paradigm for image captioning. Our innovative idea is to explore the rich modes in the training caption corpus to learn a set of "mode embeddings", and further use them to control the mode of the generated captions for existing image captioning models. Specifically, the proposed DML optimizes a dual architecture that consists of an image-conditioned discrete variational autoencoder (CdVAE) branch and a mode-conditioned image captioning (MIC) branch. The CdVAE branch maps each image caption to one of the mode embeddings stored in a learned codebook, and is trained with a pure non-autoregressive generation objective to make the modes distinct and representative. The MIC branch can be simply modified from an existing image captioning model, where the mode embedding is added to the original word embeddings as the control signal. In the experiments, we apply the proposed DML to two widely used image captioning models, Transformer and AoANet. The results show that the learned mode embedding successfully facilitates these models to generate high-quality image captions with different modes, further leading to better performance for both diversity and quality on the MS COCO dataset.
Accept
The paper tackles the problem of mode collapse in image captioning and provide a method for generating diverse captions. The proposed approach uses a VAE to learn various modes, each of which can produce a different caption, along with various technical innovations to train the model. Experiments with two models on MS COCO demonstrate the effectiveness of the approach. The paper offers useful insights for the challenging and important problem of diverse captioning and will inform future work in this space. I encourage the authors to make the revisions suggested by reviewers to improve the clarity of the writing and also include adequate justification for the choice of their base models.
train
[ "zDfe1Sjsxhj", "5qJ_uzPplCk", "SQ-KOm8hncp", "XUCFY5KgTw", "ftU_UjXv9iR", "Fpwf5PxHp0P", "NSN1DmvHg6-", "q0dvVGnLzyN", "JQV7QTuF8LB", "AErXxR1BcJK", "eJw8mEYdJb", "CNf1xtzIBqy", "0s9JmxVUDgO", "jEzMX9BPxwV", "ya41QVm0gF", "3o8tGqKObDl", "_QOOdnBNBlr", "UnysDwy9Ulj", "uHVOeeylf5", "e94c6ND9XPi", "6CVVx0_CIqw" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your encouragement and valuable comments on our work! We will keep working on the mode collapse problem to get a better understanding.", " Thank you for the insightful comments on our work! We will include the discussions and explanations of the comparison part w.r.t. the metrics and mode architecture in our revised paper and make them clearer.", " Thank you for your valuable suggestions! We agree that it would be helpful for pre-training our CdVAE on a larger-scale dataset. We will consider it as our future work. Also thanks for your interest in our paper.", " Thank you again for the constructive comments! We will add the clarifications in the response to our revised paper.", " Hi Authors!\n\nJust commenting to let you know that I have read + thought about your responses. I am still quite positive about your work, and appreciate the extra experiment you ran with mask-predict. While it doesn't entirely address my thought that perhaps mode collapse could be, it does bolster your (already strong) case that, at least for these particular decoding algorithms, your new method can be helpful (which is a big step!).", " I have read the feedback and other reviewers' comments. Basically, I think this paper is interesting since diverse image captioning is important for both research and applications and the proposed model outperforms existing approaches. But I still strongly recommend using a pre-trained CdVAE, which could be helpful. \n\nI will raise the rating to weak accept.", " I have read the comments from the authors and other reviewers. I decide to raise the rating to borderline accept since\n\n(1) The authors have addressed most of my concerns.\n\n(2) The other reviewers give quite positive comments for this paper on improving VQ-VAE and applying it to the diverse image captioning task.\n\n(3) I still believe that better baselines (e.g. visual language pre-training models) and Karpathy split could make this work more solid.", " The reviewer has gone through the rebuttal provided by the author. The authors have successfully addressed the reviewer's concern and provided convincing explanations, especially in the unfair comparison part in terms of metrics and model architecture. Therefore, the reviewer decided to raise the score and suggest accepting this paper.", " **Q4: I was a bit disappointed by the discussion of potential negative societal impacts: what does image captioning have to do with financial scams? I hope the authors can think a bit more about automatic image captioning's potential negative impacts, though I do not think this particular work poses much marginal dual use risk.**\n\n**A4**: Thank you very much for pointing out this. We were not able to find some potential negative impact for our image captioning model, but perhaps, the Discrete Mode Learning paradigm may be applied in pretrained generative models using large-scale web data to learn specific modes for special communities, which may further cause gender/racial bias problems.", " Thanks for your valuable comments and questions, our responses are in the following. Hope they can address your concerns.\n\n**Q1: This may be explored in prior work, but I was hoping for some information about whether or not the observed mode collapse of models is actually P(y|x) mode collapse, or if it is simply a result of the decoding algorithms favoring blander outputs. Is it possible to score the ground truth captions y_i to see if they collapse to the most generic one vs. 
just relying on the sampled captions, which may be affected by the decoding approach? (Is mode collapse really occurring with image captioning models, or are the decoding methods the reason why the generated captions seem generic?)**\n\n**A1**: We try a non-autoregressive decoding algorithm termed Mask-Predict [a] for Transformer and find that the mode collapse problem still exists. For example, about 1.2% captions in the training set of MSCOCO start with “a/an image/picture/view/close up of”, however, in the captions generated by Mask-Predict, this only accounts for 0.5%; about 0.7% training captions start with “there is/are”, and about 3.7% training captions are longer than 15 words, but for captions generated by Mask-Predict these numbers are all 0.0%. Thus, we hypothesize that generative models tend to concentrate on the dominant patterns and suppress the rare patterns in the long-tail target distribution, no matter which decoding algorithm we use.\n\n[a] Ghazvininejad, Marjan, et al. \"Mask-Predict: Parallel Decoding of Conditional Masked Language Models.\" Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019.\n\n**Q2: While the ablations clearly demonstrate that the non-autoregressive training objective /model works to spread out mode assignments, I wasn't entirely clear how the autoregressive model avoids mode collapse at inference time, given the discussion/experiments about how \"predict the next token\" by itself does indeed lead to mode collapse. I'm not sure how the authors could address this beyond the ablation they already ran, but perhaps, e.g., the attention of the transformer-based model weighs the VAE cluster token more highly after NAT training? Could be interesting to look at.**\n\n**A2**: In the MIC branch, the mode embedding is added to every input word embedding (just like how the positional embedding is added to the word embedding), thus we cannot directly visualize how the transformer decoder utilizes the mode information. Nevertheless, we find that the decoding algorithm is not the key to cause/avoid the mode collapse problem, i.e., whether using autoregressive decoding or non-autoregressive decoding, the mode collapse problem is likely to happen if no mode information is input into the model (see the answer to Q1). In fact, our purpose of using a non-autoregressive decoding algorithm for the CdVAE branch is not to avoid the mode collapse problem, but instead to improve the quality of the learned mode representations. We believe what really alleviates the mode collapse problem is the use of a set of well-learned mode representations (like those in the Figure 5c of the main submission), which can drive the output distribution towards a specific target for both autoregressive decoding and non-autoregressive decoding. Similar observations have also been found in [b].\n\n[b] Deng, Chaorui, et al. \"Length-controllable image captioning.\" European Conference on Computer Vision. Springer, Cham, 2020.\n\n**Q3: It would have been nice to see this model run on datasets other than MSCOCO. I wonder if this approach could still work even if there were only one reference at training time?**\n\n**A3**: Thanks for your suggestion. We run an experiment on a subsampled version of MSCOCO, where each image is paired with only one caption. 
We find that with careful tuning of the learning rates and batch sizes of the CdVAE branch and the MIC branch, the proposed method is still able to learn representative modes from the training corpus, e.g., we still find some modes tend to use “there is/are”, “a/an view/image of” and color words. Actually, these are the easiest modes to be discovered. With more tuning, we may be able to find more meaningful modes. We have planned to do this in further work, and also planned to apply the proposed method on large-scale image-text datasets like Conceptual Captions, to see if we can learn more useful modes after large-scale pretraining.\n\n\n", " **Q11: In real-world applications, the users usually only need one caption with the best quality. How does the proposed method search and find the best performing caption among all the modes?**\n\n**A11**: We would like to answer this question from two aspects. On the one hand, it is usually hard to define the “best performing caption” in real-world scenarios. Different users would prefer different caption styles. In this sense, it is better to provide them with multiple captions that have distinct and representative modes. Some advertisement/design text generation applications have indeed adopted this strategy. On the other hand, if the “best performing caption” does exist, we can still train a “best mode selector” to predict the “most suitable mode” directly from the image through policy gradient techniques. This should be effective for some specific modes, like if an image is semantically complicated, a long and detailed caption could be more suitable than a short and brief caption, and if an image is semantically simple, a brief caption is generally preferred. We leave this to future work.\n", " **Q7: In Tab.2, baseline methods can also generate multiple samples by beam search. It would be helpful to include those results for AoANet and Transformer as a baseline comparison.**\n\n**A7**: We evaluate the oracle performance of AoANet and Transformer with beam search (BS) w.r.t. both 20 and 100 samples, and provide the results in the table below.\n\n|Method | #Sample | B@1 | B@2 | B@3 | B@4 | R | M | C | S |\n| ------------ | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |\n| AoANet-BS | 20 | 0.912 | 0.808 | 0.700 | 0.580 | 0.746 | 0.458 | 1.829 | 0.323 |\n| Transformer-BS | 20 | 0.918 | 0.804 | 0.699 | 0.578 | 0.747 | 0.443 | 1.816 | 0.328 |\n|AoANet-DML | 20 |0.917| 0.799 |0.682| 0.554| 0.734| 0.418 |1.734| 0.328|\n|Transformer-DML | 20 | 0.915| 0.788| 0.663| 0.526| 0.726| 0.417 |1.704| 0.325|\n| AoANet-BS | 100 | 0.955 | 0.883 | 0.805 | 0.710 | 0.817 | 0.543 | 2.068 | 0.368 |\n| Transformer-BS | 100 | 0.959 | 0.878 | 0.791 | 0.703 | 0.813 | 0.533 | 2.011 | 0.372 |\n|AoANet-DML | 100 |0.947| 0.850| 0.752| 0.652| 0.782| 0.479| 1.960| 0.356|\n|Transformer-DML| 100 |0.946| 0.849| 0.750 |0.649 |0.780| 0.474| 1.953| 0.354|\n\nNote that, although beam search achieves remarkable oracle performance, its diversity scores are very low (see Table1 of the main submission), indicating that most of the sampled captions are very similar. Moreover, beam search is extremely time-consuming: running the Transformer model with beam size 100 takes 10 hours on our machine. While the proposed model only requires 20 minutes to generate 100 captions. 
We will include the results and the discussions in the revision.\n\n**Q8: K (total number of modes in the codebook) seems to be an important hyper-parameter. How does the performance vary with respect to K?**\n\n**A8**: In our main submission, we have trained Transformer-DML and AoANet-DML with different codebook sizes (i.e., k = 20, 64, and 100, which can be found in Table 2 and Section 4.5), and evaluated the oracle results. Specifically, Transformer-DML achieves 1.704, 1.871 and 1.953 oracle CIDEr scores with k = 20, 64, and 100, respectively. Moreover, the numbers of effective modes for k = 20, 64, and 100 are 20, 29, and 34, respectively. A similar trend is also observed in the results of AoANet-DML, showing that the oracle performance improves with a larger codebook size, which is reasonable since more effective modes are learned so the recall rate for the modes that appear in the testing split is increased.\n\n**Q9: The ablation study of the second and the last loss terms in Eq.2 is not provided.**\n\n**A9**: As we have mentioned in Section 3.1 of the main submission, we design the loss function following [a] to optimize our mode encoder, masked decoder, and the codebook. Since the codebook will only be optimized by the second loss term, if we remove it, the mode embeddings will never be updated and the whole method will not work. For the last term, we have already tried different values for $\\beta$ and found the performance is quite robust. Similar observations have also been found in [a]. Thus, we directly adopt the default value of $\\beta$ in [a].\n\n[a] Van Den Oord, Aaron, and Oriol Vinyals. \"Neural discrete representation learning.\" Advances in neural information processing systems 30 (2017).\n\n**Q10: Recent VL works typically pre-train a cross-modal transformer model on large image and text corpus and then fine-tune to the downstream target task such as image captioning. It is unknown how the proposed method can be incorporated into the transformer pre-training methods.**\n\n**A10**: It is an interesting topic to combine the proposed method with large-scale vision-language pretraining. Considering that it may need a lot of effort to tune, we plan to explore this direction in further work. We have several initial ideas. The simplest one is to initialize the MIC branch with a pretrained vision-language model, which may lead to higher performance for each learned mode. Besides, we can also directly pretrain the proposed method on large-scale image-text data with the same training objective used in this submission to facilitate a mode-content disentangled vision-language pretraining.\n\n", " **Q3: COS-CVAE uses LSTM as the language generation model, which is weaker than the transformer model. Therefore, it would be really helpful if the author can either: (1) incorporate their method to the same model architecture as COS-CVAE (LSTM), and/or (2) implement COS-CVAE with a transformer model.**\n\n**A3**: We run experiments using the UpDown [b] model (a two-layer LSTM with a visual attention module) for our MIC branch, which is also the language generation model used by COS-CVAE. The oracle performance of this model is 1.688 and 1.942 in terms of CIDEr for 20 and 100 samples, respectively, still outperforms the COS-CVAE by a large margin. In fact, UpDown is a strong model that achieves compatible performance with a 6-layer Transformer model in a general image captioning setting (1.099 CIDEr vs. 
1.114 CIDEr on Karpathy’s test split), which means that two-layer LSTMs may already have enough capacity for the COCO dataset. We will give more discussions on this in the revision.\nMoreover, considering that COS-CVAE requires a pre-processing step to construct pseudo supervisions with the help of a pretrained joint vision-language embedding model, the proposed end-to-end learning method could be more convenient to use than COS-CVAE.\n\n[b] Anderson, Peter, et al. \"Bottom-up and top-down attention for image captioning and visual question answering.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.\n\n**Q4: In Sec. 4.5, only qualitative results and the oracle CIDEr scores are provided. What about the diversity scores?**\n\n**A4**: We cannot directly compute the diversity scores under a fair setting for the models in Figure 5a (DML w/o NAT) and Figure 5b (DML w/o Hungarian assign) since they only have five and three effective modes respectively and cannot provide enough candidates for consensus reranking. Nevertheless, we still calculate the SelfCIDEr scores for the models in Figure 5 by skipping the consensus reranking step and calculating the score within three randomly sampled captions for each image. The diversity scores are 0.64, 0.73, and 0.86 for DML w/o NAT, DML w/o Hungarian assign, and the original DML.\n\n**Q5: In Sec. 4.5, any discussions or analyses of the collapsed modes? Do they really lead to the same output samples? Even the proposed method has a mode collapse issue.**\n\n**A5**: In Section 4.5, we train three models with a default codebook size of 64. The first two models, DML w/o NAT and DML w/o Hungarian assign only activate a few entries of the codebook (five and three, respectively), and the output samples generated by different modes are indeed very similar for both of these two models, indicating a severe mode collapse issue. This is also reflected by their low diversity scores (see **A4**). The proposed DML activates 29 out of 64 entries and the output samples are very diverse and have some clear language patterns (see the diversity scores in **A4** and the visualization results in the supplementary material). Although it does not fully utilize the codebook, we hypothesize that the distinct and representative modes contained in the training corpus of the COCO dataset may not be large since it only contains descriptive sentences. Thus, the proposed DML can effectively alleviate the mode collapse issue. We will give more discussions on the collapsed mode in the revision.\n\n**Q6: In line 208, it says the mode embedding is added to the word embedding of each token in $y_i$. Does that mean for each $w_{j<t}$ in Eq.5 when generating the next token $w_t$?**\n\n**A6**: Yes, we add the mode embedding to previous word tokens $w_{j<t}$ element-wisely when generating the next token $w_t$. We will make it clear in the revision.\n", " Thanks for your valuable comments, our responses are in the following. Hope them can address your concerns.\n\n**Q1: The training objectives of CdVAE are very similar to [a], but the reviewer considers the incorporation of an existing method for solving an interesting question in a new setting as a contribution. Nevertheless, it would still be good to describe the difference.**\n\n[a] Van Den Oord, Aaron, and Oriol Vinyals. 
\"Neural discrete representation learning.\" Advances in neural information processing systems 30 (2017).\n\n**A1**: While the training objective is indeed inspired by [a], the resultant model is totally different. Specifically, we decouple the input (caption) of CdVAE into two types of information: the mode and the content, where the encoder of CdVAE only extracts the mode information from the caption, and for the content information, we adopt the image representations shared from the MIC branch. This enables the CdVAE branch to explicitly focus on the learning of mode. On the other hand, the original method in [a], as well as previous Conditional VAE-based diverse image captioning methods (like COS-CVAE), do not have this information decoupling procedure.\nMore importantly, as we have mentioned in the main submission, simply applying the learning method of [a] to our setting did not work well (see Figure 5 of the main submission). To aid this, we introduce several new components, i.e., the Hungarian mode assignment and the fully non-autoregressive objective, which have been proved to be the key ingredients to make the proposed method work. Lastly, we also make several important designs to make the convergence speed of the two branches compatible and thereby prevent overfitting, as described in Section 3.4 of our main submission and verified in Section A of the supplementary materials. We will clarify these differences carefully in the revision.\n\n**Q2: The performance improvement in terms of the diversity scores in Table 1 is not significant. The SelfCIDEr score is also missing.**\n\n**A2**: We evaluate the SelfCIDEr of COS-CVAE which is 0.79, inferior to the performance of our method (0.83). Although the improvement of our method on diversity metrics may not be significant, we would like to argue that:\n1) The proposed method focuses on not only the diversity of captions, but more importantly, the controllability. Specifically, we have discovered several distinct and representative language patterns from the generated captions which can be controlled by some of the learned modes (see the visualization results in the main submission and the supplementary material). This kind of controllability is lacking in the previous SoTA COS-CVAE as well as other diverse image captioning models.\n2) When evaluating the diversity score on the testing split, the nearest neighbor search in the consensus reranking step is performed in the embedding space of VGGNet pretrained on the ImageNet dataset, which may lead to inaccurate search results due to the low representation ability of VGGNet and the domain shift between ImageNet and COCO. Thus, the captions used for diversity evaluation may not be semantically aligned with the test image. Moreover, the diversity metrics themselves only focus on n-gram diversities and ignore the semantic alignment (i.e., they can be easily hacked with random tokens), further exposing the misalignment problem.\n\nThus, it would be better to consider the results from both Table 1 and Table 2 when assessing the ability of a diverse image captioning model. I.e., a high diversity score and a high oracle score mean that the generated captions may have a high recall rate for the diverse modes that appear in the reference captions, and with each mode the model is able to produce captions with good quality (i.e., fluency and semantic alignment), indicating a human-like behavior. 
On the other hand, if the diversity score is high but the oracle score is low, this means the generated captions may contain incorrect or misaligned tokens which may not be as diverse as the diversity score indicates.\n\nIn this sense, compared with COS-CVAE, our model achieves better performance in terms of the diversity scores and is also less affected by the misalignment problem due to the significantly better oracle scores, suggesting that the real diversity gap between the proposed method and COS-CVAE could be underestimated. We will give more discussions on this in the revision.\n", " **Q1: The weakest part of the paper is that the framework seems just combines VQ-VAE and a normal image captioning model, though some findings are interesting, such as directly using VQ-VAE cannot capture many modes and to mitigate this issue, this paper introduces Hungarian algorithm and masked decoder.**\n\n**A1**: Thanks for your valuable comments. While the proposed method is indeed inspired by VQ-VAE, the resultant model is totally different. Specifically, we decouple the input (caption) of CdVAE into two types of information: the mode and the content, where the encoder of CdVAE only extracts the mode information from the caption, and for the content information, we adopt the image representations shared from the MIC branch. This enables the CdVAE branch to explicitly focus on the learning of mode. On the other hand, the original VQ-VAE, as well as previous Conditional VAE-based diverse image captioning methods, do not have this information decoupling procedure.\nMore importantly, simply applying the learning method of VQ-VAE to our setting did not work well as shown in Figure 5 of our main submission. To aid this, we introduce several new components, i.e., the Hungarian mode assignment and the fully non-autoregressive objective, which have been proved to be the key ingredients to make the proposed method work. Lastly, we also make several important designs to make the convergence speed of the two branches compatible and thereby prevent overfitting, as described in Section 3.4 of our main submission and verified in Section A of the supplementary materials. We will clarify these differences carefully in the revision.\n\n**Q2: The CdVAE branch is separate from the image captioning branch, and CdVAE can be trained using a large-scale dataset composed of texts, so I am wondering about the performance of using a large-scale dataset.**\n\n**A2**: Thanks for this suggestion. The proposed CdVAE can be trained using a large-scale text-only dataset as long as there are two inputs: one provides the mode information and the other provides the content information. Since this requires a lot of data and computational resources, as well as careful network architecture tuning, we plan to explore it in further work.\n", " **Q5: The common captioning models generate one caption for each image, while this work and some previous related works generate 20 or 100 captions for each image. Which kind is better?**\n\n**A5**: The general image captioning models usually cannot make a good balance between quality and diversity & controllability. Thus, diverse and controllable image captioning methods are proposed to tackle this problem, which can be as good and efficient as general image captioning models when generating one caption for each image, while also having the ability to produce multiple different captions describing the image in various ways. 
In this sense, the latter is more close to human behavior (humans can describe an image in various ways) and could be more useful in practice.\n\n**Q6: The visual language pre-training models achieve good performance on image captioning recently. Do you think our (your) new methods should apply to those methods?**\n\n**A6**: Thanks for the suggestion. The proposed Discrete Mode Learning (DML) is a general learning paradigm and does not rely on specific backbones. This is why we can deploy it on both Transformer and AoANet. Large-scale vision-language pretraining models are normally built based on Transformer structure so we believe our DML can be applied to them as well. However, large-scale vision-language pretraining models generally require huge costs to train. Thus, we have planned to do this in further work.\n", " Thanks for your comments, our responses are in the following. Hope they can address your concerns.\n\n**Q1: The main models used in this paper are Transformer (2017) and AoANet (2019), which are out of date. Consider more recent models like M2Transformer (2020), XLAN (2020), and pretraining models like Oscar (2020).**\n\n**A1**: The proposed Discrete Mode Learning (DML) is a general learning paradigm and we expect it can be easily applied in many existing image captioning models and improve their controllability and diversity. Therefore, we choose two widely-used and representative architectures, i.e., Transformer and AoANet, to show the generalization ability of DML. Specifically, most current state-of-the-art image captioning models, like M2Transformer and many vision-language pretraining models, are based on the Transformer architecture. By showing the ability of DML on Transformer, we show the potential ability of DML on these Transformer-based state-of-the-art models.\nMoreover, the performance of Transformer and AoANet is also competitive with the state-of-the-art M2Transformer and XLAN. With Self-Critical training and without vision-language pretraining, Transformer achieves 127.7 CIDEr score [a] and AoANet achieves 129.8 [b], while M2Transformer and XLAN achieve 131.2 [c] and 132.0 [d], respectively, indicating a close performance gap. Therefore, we believe Transformer and AoANet are two representative and strong baselines when vision-language pretraining is not performed, and they are good enough to illustrate the superior performance led by our DML model. Thanks for this suggestion, we will also try our method on more powerful pretraining models (which is not the major focus of this paper) in future work.\n\n[a] [https://github.com/ruotianluo/self-critical.pytorch/blob/master/MODEL_ZOO.md](https://github.com/ruotianluo/self-critical.pytorch/blob/master/MODEL_ZOO.md)\n\n[b] Huang, Lun, et al. \"Attention on attention for image captioning.\" Proceedings of the IEEE/CVF international conference on computer vision. 2019.\n\n[c] Cornia, Marcella, et al. \"Meshed-memory transformer for image captioning.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.\n\n[d] ​​Pan, Yingwei, et al. \"X-linear attention networks for image captioning.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.\n\n**Q2: Since the codebook is in latent space, it is hard to control the semantic meaning of a certain mode. It is a weak point of the latent space based method. The diversity comes from different modes but we can not know which mode is needed. 
For example, people rarely have the patience to choose the needed captions from 100 samples.**\n\n**A2**: As we have mentioned in the Limitation section, without explicit annotations for the mode of the captions, it is indeed difficult to control the semantic meaning of each learned mode. However, compared with the continuous latent code used in other latent space based methods, our discrete modes are more interpretable where some of them facilitate clear and specific language patterns, as shown in the visualization results in the main submission and the supplementary material. In this sense, people can acquire captions with their preferred styles by choosing the corresponding modes, which is very useful in scenarios like advertisement/design text generation.\nMoreover, the ability of learning interpretable and representative modes via a purely unsupervised manner also makes the proposed model much more convenient to use compared with those that require additional annotations, and the learned modes are also not restricted within the predefined set. These are the specific advantages of the proposed method.\n\n**Q3: This paper uses the m-RNN split, and the most recent image captioning works use Karpathy split [4r]. The results are hard to compare in this case. It will be a contribution if this paper also turns the previous typical works (e.g. COS-CVAE) into the Karpathy split.**\n\n**A3**: Since our submission mainly focuses on the task of Diverse and Controllable Image Captioning, we follow the previous works in this area to use the m-RNN split of the COCO dataset, which is fair in comparison. When evaluating the performance of each mode, the baseline models (AoANet and Transformer) are also trained and evaluated under the m-RNN split, as we have mentioned in Table 3 of our main submission, which is also fair in comparison.\n\n**Q4: Paper writing needs to be refined. In Figure 2, the text of \"y1, y2, y3\" appears three times exactly the same, which does not help to show the main idea.**\n\n**A4**: Thanks for your suggestion, we will revise the figure carefully.\n", " This paper aims to learn distinct and representative modes for image captioning. The paper propose Discrete Mode Learning (DML), which is a good method to improve the diversity of image captioning. The DML optimizes a dual architecture that consists of an image-conditioned discrete variational autoencoder (CdVAE) branch and a mode-conditioned image captioning (MIC) branch.\n Strengths:\n1. The experimental results show that the DML works well. \n2. The Discrete Mode Learning (DML) method is reasonable, and the training loss is well designed.\n\nWeaknesses:\n1. The main models used in this paper are Transformer (published in 2017) and AoANet (published in 2019), which are out of date. M2Transformer and XLAN are open sources. And many pre-training models (like Oscar [3r]) are also open source. \n2. Since the codebook is in latent space, it is hard to control the semantic meaning of a certain mode. It is a weak point of the latent space based method. The diversity comes from different modes but we can not know which mode is needed. For example, people rarely have the patience to choose the needed captions from 100 samples.\n3. This paper uses the m-RNN split, and the most recent image captioning works use Karpathy split [4r]. The results are hard to compare in this case. It will be a contribution if this paper also turns the previous typical works (e.g. COS-CVAE) into the Karpathy split.\n4. Paper writing needs to be refined. 
\nIn Figure 2, the text of \"y1, y2, y3\" appears three times exactly the same, which does not help to show the main idea.\n\n\n[1r] Marcella Cornia et al. “Meshed-Memory Transformer for Image Captioning” computer vision and pattern recognition (2019).\n[2r] Yingwei Pan et al. “X-Linear Attention Networks for Image Captioning” computer vision and pattern recognition (2020).\n[3r] Xiujun Li et al. “Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks” European conference on computer vision (2020).\n[4r] Andrej Karpathy and Li Fei-Fei. “Deep Visual-Semantic Alignments for Generating Image Descriptions” IEEE Transactions on Pattern Analysis and Machine Intelligence (2014).\n 1. The common captioning models generate one caption for each image, while this work and some previous related works generate 20 or 100 captions for each image. Which kind is better?\n2. The visual language pre-training models achieve good performance on image captioning recently. Do you think our new methods should apply to those methods? There is no potential negative societal impact of this work.", " This paper proposed a diverse image captioning model, namely conditioned discrete variational autoencoder (CdVAE), which is able to learn multiple modes of captions. Plus, to learn better mode embeddings, a masked decoder and Hungarian algorithm are introduced into the training process. Finally, extensive experiments are conducted, showing that the proposed model can generate diverse captions for an input image. The weakest part of the paper is that the framework seems just combines VQ-VAE and a normal image captioning model, though some findings are interesting, such as directly using VQ-VAE cannot capture many modes and to mitigate this issue, this paper introduces Hungarian algorithm and masked decoder. Another strength of the paper is that extensive experiments are conducted and the performance of the paper is relatively good. Plus, the paper is well written and can be easily followed. The CdVAE branch is separate from the image captioning branch, and CdVAE can be trained using a large-scale dataset composed of texts, so I am wondering about the performance of using a large-scale dataset. N/A", " This paper presents a strategy to generate a diverse set of captions for image captioning. The proposed method is composed of two parts: (1) CdVAE which learns a discrete code book of mode embedding, and (2) MIC which leans to generate captions conditioned on a mode embedding. The proposed method demonstrates slightly better diversity scores and quality scores compared to previous SoTA COS-CVAE. ### Strength\n\n1. The training objectives of CdVAE is very similar to [1], but the reviewer considers the incorporation of an existing method for solving an interesting question in a new setting (eg. image captioning) as a contribution. Nevertheless, it would still be good to describe the difference.\n\n1. The proposed method seems to outperform SoTA COS-CVAE in both diversity and quality (more in the weakness section).\n\n1. The core idea and the proposed method is easy to understand and follow.\n\n1. Fig.5 successfully demonstrates how the proposed training techniques affect the learned codebook of mode embedding.\n\n[1] Van Den Oord, Aaron, and Oriol Vinyals. \"Neural discrete representation learning.\" Advances in neural information processing systems 30 (2017).\n\n### Weakness\n\n1. The comparison against SoTA COS-CVAE is unfair. 
First of all, the performance improvement in terms of the diversity scores in Tab.1 is not significant. The SelfCIDEr score is also missing. Since COS-CVAE released its code, it would be good to re-run the method and include the number. Secondly, COS-CVAE uses LSTM as the language generation model, which is weaker than the transformer model. Therefore, given the slight performance improvement and the usage of a stronger transformer model in this work, it is hard to conclude that the performance improvement compared to SoTA COS-CVAE is actually coming from the proposed method.\n\n1. In Sec. 4.5, only qualitative results and the oracle CIDEr scores are provided. What about the diversity scores? Furthermore, any discussions or analyses of the collapsed modes? Do they really lead to the same output samples? Even the proposed method has a mode collapse issue. Please provide further studies.\n\n1. In line 208, it says the model embedding is added to the word embedding of each token in $y_i$. Does that mean for each $w_{j<t}$ in Eq.5 when generating the next token $w_t$? \n\n1. In Tab.2, baseline methods can also generate multiple samples by beam search. It would be helpful to include those results for AoANet and Transformer as a baseline comparison.\n\n1. K (total number of modes in the codebook) seems to be an important hyper-parameter. How does the performance vary with respect to K?\n\n1. The ablation study of the loss terms (1) $| \\text{sg}[e(y_i)] − q(y_i)|^2$ and (2) $| e(y_i) − \\text{sg}[q(y_i)]|^2$ in Eq.2 is not provided. - The biggest concern the reviewer has is how to fairly compare with SoTA COS-CVAE method as described in the weakness section. It would be really helpful if the author can either: (1) incorporate their method to the same model architecture as COS-CVAE (LSTM), and/or (2) implement COS-CVAE with a transformer model. In either way, we can rule out the possibility of architectural improvement and focus on the contribution of the proposed method.\n\n- Further analyses of the collapsed mode in Fig.5 as described in the weakness section.\n\n- Some ablations are missing such as K and the loss terms. - Recent VL works typically pre-train a cross-modal transformer model on large image and text corpus and then fine-tune to the downstream target task such as image captioning. It is unknown how the proposed method can be incorporated into the transformer pre-training methods.\n\n- In real-world applications, the users usually only need one caption with the best quality. How does the proposed method search and find the best performing caption among all the modes?", " The authors consider the mode collapse problem in image caption\ngeneration. They give a diverse caption generation model that differs\nfrom prior work because 1) it's more controllable than conditioning on\nrandomly-generated latent codes; and 2) the control codes are learned\nvia VAE clustering, rather than pre-specified by styles in the\ntraining data, e.g., \"funny\". Several modeling innovations are key to\nthe end-to-end training working, e.g., a non-autoregressive component,\nand a hungarian method for mode assignment. The authors evaluate their\nmodel on the MSCOCO benchmark, demonstrating that their model 1)\ngenerates diverse and controllable-in-style captions based on\ninterpretable clusters; and 2) performs quite well in the\noracle-conditioned setup according to standard evaluation metrics. Strengths:\n\nI really liked this work! 
The introduction frames the problem of mode\ncollapse quite effectively, and the story flows well. The modeling\ncomponents are well-motivated, interesting, and clever. The empirical\nresults are strong. The ablations are appropriate. The qualitative\nresults are convincing. I expect that this approach could be useful\nfor tasks beyond image captioning, too, e.g., in cases where there\nisn't one objectively correct generation target, a similar VAE-style\nclustering could come in handy, without additional labeling, to give\ncontrol over generations when the targets implicitly contain different\nstyles.\n\nWeaknesses:\n\n- This may be explored in prior work, but I was hoping for some\n information about whether or not the observed mode collapse of\n models is actually P(y|x) mode collapse, or if it is simply a result\n of the decoding algorithms favoring blander outputs. Is it possible\n to score the ground truth captions y_i to see if they collapse to\n the most generic one vs. just relying on the sampled captions, which\n may be affected by the decoding approach?\n\n- While the ablations clearly demonstrate that the non-autoregressive\n training objective (/model) works to spread out mode assignments, I\n wasn't entirely clear how the autoregressive model avoids mode\n collapse at inference time, given the discussion/experiments about\n how \"predict the next token\" by itself does indeed lead to mode\n collapse. I'm not sure how the authors could address this beyond the\n ablation they already ran, but perhaps, e.g., the attention of the\n transformer based model weighs the VAE cluster token more highly\n after NAT training? Could be interesting to look at.\n\n- It would have been nice to see this model run on datasets other than\n MSCOCO. I wonder if this approach could still work even if there\n were only one reference at training time?\n\n\nOverall: I really liked this work, and don't have many improvements to\nsuggest beyond discussing the detail about mode collapse vs. decoding\nas discussed above. It's possible that I missed something because I am\nless up-to-date on the diverse captioning literature, but if I didn't,\nI hope this work appears at NeurIPS. - Is mode collapse really occurring with image captioning models, or are the decoding methods the reason why the generated captions seem generic? I was a bit disappointed by the discussion of potential negative\nsocietal impacts: what does image captioning have to do with\nfinancial scams? I hope the authors can think a bit more about\nautomatic image captioning's potential negative impacts, though\nI do not think this particular work poses much marginal dual use\nrisk." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "ftU_UjXv9iR", "q0dvVGnLzyN", "Fpwf5PxHp0P", "NSN1DmvHg6-", "JQV7QTuF8LB", "ya41QVm0gF", "UnysDwy9Ulj", "e94c6ND9XPi", "AErXxR1BcJK", "6CVVx0_CIqw", "CNf1xtzIBqy", "0s9JmxVUDgO", "jEzMX9BPxwV", "e94c6ND9XPi", "uHVOeeylf5", "_QOOdnBNBlr", "UnysDwy9Ulj", "nips_2022_LMuh9bS4tqF", "nips_2022_LMuh9bS4tqF", "nips_2022_LMuh9bS4tqF", "nips_2022_LMuh9bS4tqF" ]
nips_2022_7rcuQ_V2GFg
Parameter-Efficient Masking Networks
A deeper network structure generally handles more complicated non-linearity and performs more competitively. Nowadays, advanced network designs often contain a large number of repetitive structures (e.g., Transformer). They empower the network capacity to a new level but also increase the model size inevitably, which is unfriendly to either model storing or transferring. In this study, we are the first to investigate the representative potential of fixed random weights with limited unique values by learning diverse masks, and we introduce Parameter-Efficient Masking Networks (PEMN). It also naturally leads to a new paradigm for model compression to diminish the model size. Concretely, motivated by the repetitive structures in modern neural networks, we utilize one randomly initialized layer, accompanied by different masks, to convey different feature mappings and represent repetitive network modules. Therefore, the model can be expressed as \textit{one-layer} with a bunch of masks, which significantly reduces the model storage cost. Furthermore, we enhance our strategy by learning masks for a model filled by padding a given random weight vector. In this way, our method can further lower the space complexity, especially for models without many repetitive architectures. We validate the potential of PEMN by learning masks on random weights with limited unique values and test its effectiveness for a new compression paradigm based on different network architectures. Code is available at \href{https://github.com/yueb17/PEMN}{\textcolor{magenta}{https://github.com/yueb17/PEMN}}.
Accept
The paper studies the use of random weights together with learnable masks. The authors demonstrate that such a training approach for neural networks can reduce model storage requirements and has applications to network compression. Reviewers appreciated the novelty of the idea and the extensive experiments on various architectures. Adding experiments that go beyond small-scale datasets would further strengthen the quality of the paper and its potential impact.
test
[ "w94uAWJ8C5T", "Bn5fJNWoPYB", "fhWzUiE4RkI", "1y27vPXQnef", "e07jkflbeqH", "O5N52SwmNvE", "igp-nWXxdQ", "15B1oDmbwym", "Pw5m0j2bGvIw", "1v2U7GEqJU4", "82F608bOz38", "X9B5vtUf9aq", "QjYOhgragtL", "x2oKDQhGqI", "c9KnauzKDEf", "8Xv4Qt1Mx3S", "bb9HGoNOeGT", "rwtzqGmO_mG", "p5t3mbiwsGU", "A7Csk-v-i6f", "KKKSA4fougQ", "cFl87AKce0", "KsrCUGqWTh" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 9nMH,\n\nWe appreciate the reviewer's recognition and support of our work and rebuttal. We also thank the reviewer's suggestions to help us further improve our work for both scientific exploration and compression evaluation. We will prepare them accordingly for our draft to deliver a better final version.\n\nThank you very much for your time!\n\nBest wishes,\n\nAuthors of Paper1207", " Dear Reviewer jejx,\n\nWe appreciate the reviewer's recognition of our rebuttal, careful reading of our draft, and further support of our work. We will reorganize our draft and emphasize our key points and contributions to deliver a better final version.\n\nThank you very much for your time!\n\nBest wishes,\n\nAuthors of Paper1207", " Dear Reviewer vyrt,\n\nWe appreciate the reviewer's recognition of our rebuttal and pointing out the disadvantages of the current draft. We will further clarify our unclear description and supplement the corresponding content for our final version.\n\nThank you very much for your time!\n\nBest wishes,\n\nAuthors of Paper1207", " I’d like to thank the authors for the rebuttal. The rebuttal addresses many of my concerns and the authors promise to update and improve the presentation of the paper and outline ways of how they will do so. In terms of evaluation, the authors added interesting experiments in the rebuttal that makes the validation stronger. However, I agree with other reviewers that adding experiments that would go beyond small-scale datasets would strengthen the paper. Moreover, I’m still unconvinced about the potential impact of this work. Nevertheless, after the rebuttal the paper is quite complete with some interesting observations and I'll update my score accordingly.", " Most of my concerns are addressed by the authors' response.\nAlthough they don't compare with stronger baselines in the current version, the reasons claimed by the authors are convincing to me.\n\nWhat is the representative capacity of random weights? The authors make a meaningful exploration to this fundamental question. When I first review this paper, I focus on the pruning but somewhat ignore the scientific question behind it.\n\nSo I improve my score from 6 to 7.\n", " I thank the reviewers for their thorough answers. My main concern {W2} is resolved and I'll update my score. I still think that adding an experiment on high resolution data {W1} (e.g. ImageNet) would be beneficial for the camera ready version. I do not see how such an experiment would be unfair in terms of comparisons to your own baselines. Lastly, I believe that exploring a scientific question should also include scalability.", " Dear Reviewer PU7f,\n\nWe appreciate the reviewer's recognition of our rebuttal and the further support of our work.\n\nThank you very much for your time!\n\nBest wishes,\n\nAuthors of Paper1207\n", " Thanks for your response. My concerns have been addressed in the response. I think it is a novel work with extensive evaluations.", " Dear Reviewer 9nMH,\n\nWe appreciate the reviewer's recognition of our work and providing detailed comments for us to make further improvement! For the reviewer's concerns and comments, we have responded accordingly with supplemented experimental results and necessary discussions or clarifications. 
Given the NeurIPS final discussion deadline (08/09) is approaching, we really hope to have a further discussion with the reviewer 9nMH to see if our responses solve the reviewer's concerns.\n\nThank you very much for your time!\n\nBest wishes,\n\nAuthors of Paper1207", " Dear Reviewer vyrt,\n\nWe appreciate the Reviewer vyrt's valuable comments for our draft! We have responded the reviewer's comments by clarifying the unclear points and making corresponding revisions for each part of our paper. Given the NeurIPS final discussion deadline (08/09) is approaching, we really hope to have a further discussion with the reviewer vyrt to see if our responses solve the reviewer's concerns.\n\nThank you very much for your time!\n\nBest wishes,\n\nAuthors of Paper1207", " ## Supplementary repeated experiments\nWe further supplement the repeated empirical results for the rest of our experiments.\n\n**Figure.5**\nConvMixer 256-dim: CIFAR10\n\n|Depth| pr0.9 | pr0.92 | pr0.94 | pr0.96 | Ours-RP_1e-1 | Ours-RP_1e-2| Ours-RP_1e-3 |\n| :----: | :-----: | ------------ | :-----: | :-----: |:-----: |:-----: |:-----: |\n| 6 | 84.34 (0.19) // 84.45 (0.16) | 83.08 (0.26) // 83.31 (0.25) | 81.17 (0.30) // 80.99 (0.23) | 77.10 (0.35) // 77.34 (0.28)| 88.70 (0.21) | 88.89 (0.13) | 88.52 (0.15) |\n| 8 | 86.93 (0.14) // 86.88 (0.28) | 85.30 (0.32) // 84.98 (0.11) | 83.37 (0.35) // 83.72 (0.26) | 79.88 (0.27) // 79.79 (0.29)| 90.63 (0.14) | 90.59 (0.06) | 90.06 (0.22) |\t\n\nPlease note that we added compression baselines results in the first four columns (first/second terms are random/magnitude pruning). And we directly refer to the results provided above (repeated results for subfigure (a) and subfigure (b) of Figure.3). The newly added results also support our conclusions.\n\n**Supplementary Figure.1**\n\n**Subfigure (a):**\nConvMixer 256-dim: CIFAR100\n\n|Depth| pr0.9 | pr0.92 | pr0.94 | pr0.96 | Ours-RP_1e-1 | Ours-RP_1e-2| Ours-RP_1e-3 |\n| :----: | :-----: | ------------ | :-----: | :-----: |:-----: |:-----: |:-----: |\n| 8 | 63.27 (0.16) // 62.95 (0.24) | 61.80 (0.19) // 61.02 (0.34) | 58.10 (0.28) // 58.72 (0.30) | 55.35 (0.29) // 54.66 (0.23)| 64.78 (0.24) | 64.96 (0.18) | 62.83 (0.27) |\n\n**Subfigure (b):**\nConvMixer 256-dim: Tiny-ImageNet\n\n| Depth | pr0.9 | pr0.92 | pr0.94 | pr0.96 | Ours-RP_1e-1 | Ours-RP_1e-2| Ours-RP_1e-3 |\n| :----: | :-----: | ------------ | :-----: | :-----: |:-----: |:-----: |:-----: |\n| 6 | 44.72 (0.23) // 44.58 (0.16) | 42.88 (0.36) // 43.98 (0.21) | 41.73 (0.29) // 42.24 (0.17) | 40.10 (0.22) // 40.78 (0.34) | 45.31 (0.27) | 44.29 (0.21) | 44.56 (0.33) |\n\nWe also supplement the repeated results for experiments in the supplementary material. They are based on ConvMixer for CIFAR100 and Tiny-ImageNet, respectively. The first/second terms are random/magnitude pruning results in the first four columns. 
The newly added experimental results also support our conclusion in the draft.", " **Section.4.3:**\n\n**Figure.4**\n\n**subfigure (a):**\nResNet56/ResNet32: CIFAR10\n\n|Network| pr0.9 | pr0.92 | pr0.94 | pr0.96 | Ours-MP | Ours-RP_1e-1| Ours-RP_1e-2 |\n| :----: | :-----: | ------------ | :-----: | :-----: |:-----: |:-----: |:-----: |\n| ResNet56| 86.35 (0.22) // 86.74 (0.28)| 85.55 (0.14) // 86.01 (0.17) | 85.01 (0.19) // 84.62 (0.27) | 81.93 (0.16) // 82.04 (0.18) | 88.13 (0.19)| 88.36 (0.24)| 87.97 (0.21)|\n| ResNet32| 84.21 (0.13) // 84.22 (0.05)| 83.29 (0.11) // 83.60 (0.20) | 82.08 (0.16) // 81.56 (0.28) | 79.36 (0.13) // 79.12 (0.15) | 86.33 (0.14) | 86.58 (0.09) | 86.39 (0.18) |\n\n**subfigure (b):**\nResNet56/ResNet32: CIFAR100\n\n|Network| pr0.9 | pr0.92 | pr0.94 | pr0.96 | Ours-MP | Ours-RP_1e-1| Ours-RP_1e-2 |\n| :----: | :-----: | ------------ | :-----: | :-----: |:-----: |:-----: |:-----: |\n| ResNet56| 55.21 (0.36) // 54.01 (0.38) | 51.96 (0.33) // 52.54 (0.16) | 49.72 (0.18) // 49.70 (0.25) | 44.00 (0.13) // 44.18 (0.26)| 56.39 (0.29) | 56.58(0.21) | 55.84 (0.27) |\n| ResNet32| 49.21 (0.29) // 49.35 (0.14) | 47.36 (0.11) // 47.38 (0.07) | 43.70 (0.41) // 44.60 (0.29) | 39.78 (0.23) // 39.54 (0.37)| 51.76 (0.25) | 50.78 (0.30) | 51.94 (0.19) |\n\nIn the tables above, the first four columns show the compression baselines (the first // second items represent random and magnitude pruning).\nBased on the results shown above, we find our strategies generally outperform the model compression baselines and these results support our conclusion in the draft.\n\n## Code release:\nOur code will be rearranged and released for reproducibility.\n\n## Writing:\nWe appreciate the reviewer's careful reading and pointing out our typos. We will polish our draft to fix these typos and unclear descriptions for a better version.\n\n## Sparse selection ratio K:\nWe follow the Popup method to set ratio K as 0.5, which is the best ratio to conduct the sparse selection. All the experiments are based on ratio K=0.5. Thanks for pointing out our missed details and we will clarify this point in our final version.\n\n## Computational details about compression ratio of MP/RP:\nWe thank the reviewer's careful reading.\nWe use a demo numerical example to illustrate the calculations of the compression ratio for our strategies.\nWe take our methods on ConvMixer with 6-block/256-dim in Figure.5 (in purple color) as an example: (1) If we initialize the model randomly (the regular training setting), we assume all initialized parameters are unique. In this case, this number is 432460; (2) We start from our \"MP\" strategy (using the largest layer as prototype). In this case, the largest layer contains 65536 parameters. Please note our \"RP\" strategy is termed based on its prototype size compared with \"MP\" size. The strategies used in this case are \"RP 1e-1\", \"RP 1e-2\", and \" RP 1e-3\" (mentioned in caption of Figure.5). Therefore, their prototype sizes (number of unique values) are $0.1 \\times 65536$, $0.01 \\times 65536$, and $0.001 \\times 65536$, respectively. They are around 6554, 655, and 66; (3) Our compression ratio is compared with the full model size (100%), which requires to restore all 432460 float values. Compared with it, our strategies firstly require restore their prototype as float values (6554, 655, and 66). Correspondingly, they require 6554/432460, 655/432460, and 66/432460 partition of the full size (100%) model. They are 1.5%, 0.15%, and 0.015%. 
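To make the arithmetic in steps (1)-(4) easy to follow, the minimal sketch below puts the whole calculation together, including the binary-mask cost that is explained right after this sketch. The helper name and the 32-bit-float / 1-bit-mask storage assumptions are ours for illustration only (they are not taken from the released code), and the printed values differ from the rounded percentages in the text only because 1/32 is used exactly instead of 3.1%.

```python
# Minimal sketch of the storage-cost calculation for the MP/RP strategies.
# Assumptions (for illustration only): weights are stored as 32-bit floats and
# each binary mask entry costs 1 bit, i.e., 1/32 of one float value.

def compression_ratio(total_params, prototype_params, mask_bits=1, float_bits=32):
    """Fraction of the full-model float storage that is saved."""
    prototype_cost = prototype_params / total_params  # unique float values to store
    mask_cost = mask_bits / float_bits                # binary masks, relative to float storage
    return 1.0 - (prototype_cost + mask_cost)

total = 432460         # parameters of ConvMixer 6-block/256-dim (numbers from this response)
largest_layer = 65536  # size of the largest layer, i.e., the "MP" prototype

for frac in (1e-1, 1e-2, 1e-3):               # "RP 1e-1", "RP 1e-2", "RP 1e-3"
    prototype = round(largest_layer * frac)   # about 6554, 655, and 66 unique values
    ratio = compression_ratio(total, prototype)
    print(f"RP {frac:g}: prototype size = {prototype}, compression ratio = {ratio:.2%}")
# Prints roughly 95.4%, 96.7%, and 96.9%, in line with the rounded numbers reported here.
```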
Please note that since we sequentially apply padding operations, using the prototype to fill up the whole network, no additional storage cost is needed here. After obtaining the prototype, we need to restore a set of masks. Because the prototype constructs the whole model with the same structure as the original one, our masks have exactly the same size accordingly, i.e., 432460 entries in total. We can efficiently restore the binary masks at 1/32 of the storage cost of float values. Thus, this part costs a 1/32 fraction of the full-size (100%) model, which is 3.1%; (4) We combine these two parts of cost (prototype and masks) as 1.5%+3.1%, 0.15%+3.1%, and 0.015%+3.1%, i.e., 4.6%, 3.25%, and 3.115%. Converting them to compression ratios, 1-4.6%, 1-3.25%, and 1-3.115%, they are 95.4%, 96.75%, and 96.885%, respectively. Correspondingly, the x-axis values of the three purple points in the top-right corner of Figure.5 represent these three ratios. We take this case as an example; other cases follow the same procedure to calculate the compression ratio.\n\n\n## Limitations:\nWe will supplement this limitation in our main draft to provide a better understanding of our work.", " # Response to the Reviewer 9nMH:\nWe appreciate the reviewer's recognition of our study and detailed comments that help us further improve our submission. We summarize the reviewer’s comments and respond to them below.\n\n## Experiments on high-resolution dataset:\nWe agree with the reviewer that adding more challenging datasets (e.g., high-resolution image datasets) would be very helpful for improving our work.\nHowever, since our current work aims to explore a scientific question and propose a novel protocol for model compression, it is relatively hard to conduct more fine-grained evaluations in a fair and systematic manner.\nPlease refer to our response to the Reviewer jejx for more discussion of this point; we leave such exploration to our future work.\n\n## Repeated experiments:\nWe thank the reviewer for pointing this out. We supplement the repeated experimental results below with the necessary analysis. Due to the rebuttal time limitation, we first supplement the main results from Figure.3 and Figure.4 to address the reviewer's concern. Please note that due to the 9-page limitation on the draft revision, we did not change the original draft and respond here instead. 
We repeated each experiment three times and report mean and std.\n\n\n**Section.4.2:**\n\n**Figure.3**\n\n**subfigure (a):**\nConvMixer: 6-block\n\n|Dim| Mask|One-layer|MP|RP_1e-1|RP_1e-2|RP_1e-3|RP_1e-4|RP_1e-5|\n| :----: | :-----: | ------------ | :-----: | :-----: |:-----: |:-----: |:-----: |:-----: |\n| 256| 89.31 (0.16) | 88.80 (0.11) | 88.97 (0.08) | 88.70 (0.21) | 88.89 (0.13) | 88.52 (0.15)| 85.76 (0.34)| 81.01 (0.38)|\n| 512| 91.90 (0.03) | 91.87 (0.14) | 92.02 (0.02) | 92.07 (0.12)| 92.13 (0.16) | 92.05 (0.03) | 90.55 (0.14)| 87.40 (0.20)|\n\n**subfigure (b):**\nConvMixer: 8-block\n\n|Dim| Mask|One-layer|MP|RP_1e-1|RP_1e-2|RP_1e-3|RP_1e-4|RP_1e-5|\n| :----: | :-----: | ------------ | :-----: | :-----: |:-----: |:-----: |:-----: |:-----: |\n| 256| 90.42 (0.09) | 90.47 (0.04) | 90.65 (0.11) | 90.63 (0.14) | 90.59 (0.06) | 90.06 (0.22) | 87.64 (0.19) | 82.34 (0.26)\n| 512| 92.69 (0.11) | 92.71 (0.06) | 93.21 (0.05) | 92.90 (0.07) | 92.88 (0.15) | 92.90 (0.07) | 91.71 (0.20) | 87.40 (0.22)\n\n**subfigure (c):**\nViT: 6-block\n\n|Dim| Mask|One-layer|MP|RP_1e-1|RP_1e-2|RP_1e-3|RP_1e-4|RP_1e-5|\n| :----: | :-----: | ------------ | :-----: | :-----: |:-----: |:-----: |:-----: |:-----: |\n| 256| 76.35 (0.15) | 76 21 (0.14) | 76.70 (0.10) | 77.01 (0.17) | 76.76 (0.21) | 76.80 (0.20) | 64.84 (0.21) | 65.76 (0.23)\n| 512| 80.73 (0.21) | 81.56 (0.25) | 81.50 (0.11) | 81.87 (0.06) | 81.25 (0.13) | 80.98 (0.16) | 79.17 (0.25) | 79.00 (0.14) \n\n**subfigure (d):**\nViT: 8-block\n\n|Dim| Mask|One-layer|MP|RP_1e-1|RP_1e-2|RP_1e-3|RP_1e-4|RP_1e-5|\n| :----: | :-----: | ------------ | :-----: | :-----: |:-----: |:-----: |:-----: |:-----: |\n| 256| 79.21 (0.09) | 79.54 (0.16) | 79.23 (0.22) | 79.30 (0.08) | 79.62 (0.15) | 79.28 (0.14) | 73.79 (0.26) | 71.71 (0.28)\n| 512| 83.44 (0.17) | 83.50 (0.26) | 83.66 (0.12) | 83.67 (0.13) | 83.25 (0.20) | 83.34 (0.19) | 76.73 (0.31) | 78.84 (0.29)\n\nBased on the results shown above, we make several conclusions: 1) Our experimental performance are generally stable across different datasets and different settings. The supplemented results follow the accuracy patterns and support the conclusions provided in our draft; 2) ConvMixer is more stable than ViT backbone across different settings; 3) Overall, along with the decreasing number of unique values in the network (from the left column to the right column of tables), the performance variations increase correspondingly. The limited unique weight values decreases the stability of the network.\n\n", " ## Related works:\nWe appreciate the reviewer providing more related works of our study, which are missed by our current version.\nBoth of them are relevant and valuable with insightful ideas. Due to the 9-page limitation of the revision draft, we discuss these papers in the rebuttal here and we will integrate these dicussions into our final version for a better literature review.\nConcrete discussions are shown as below:\n\n\n\n- **Residual connections encourage iterative inference. International Conference on Learning Representations, 2017:**\nThis paper aims to understand the effective computational machenism of ResNet architecture. It formalize a perspective to study the iterative feature refinement achieved by the residual archtecture and understand the optimization behavior.\nIt shows the residual block encourages the feature to move along the negative gradient of loss. 
It also empirically shows that the shallow layers in ResNet focus more on representation learning while deeper layers focus on iterative refinement of the learned features. Leveraging the iterative refinement perspective, this paper explores a residual-layer weight-sharing strategy. It finds that training residual blocks with weight sharing leads to overfitting and proposes a batch-normalization-based approach to handle this issue. Different from this work, our study explores the representative capacity of given random weights by iteratively learning different masks on them. In this way, the fixed random weights can deliver diverse feature mappings. Naturally, a new model compression paradigm emerges along with our exploration of the representative capacity of random weights.\n\n- **Recurrent convolutions: A model compression point of view. NIPS Workshops: Compact Deep Neural Network Representation with Industrial Applications, 2018:**\nThis work focuses on efficiently using recurrent convolution (RC) kernels to conduct model compression. Specifically, it uses the same RC kernel and unrolls it multiple times to reduce the layer-wise redundancy in the network. To tackle the performance drop caused by RC kernel sharing, this paper designs a simple yet effective strategy based on a variant of batch normalization. This variant enables the BN layers of RC kernels to be learned independently. In this way, the performance drop caused by weight sharing can be recovered. Further, unrolling RC kernels can be employed as a practical strategy for model compression. Different from this work, our study explores the capacity of fixed random weights, which covers more general network cases (e.g., CNN, Transformer). In addition, leveraging the proposed new protocol for compression, our compression strategy studies more fine-grained weight sharing with limited unique weight values. Different masks can be applied to the given fixed weights to output diverse feature mappings. In this way, the model storage is more efficient compared with typical compression approaches, which enables model compression in more extreme cases.", " # Response to the Reviewer jejx:\nWe appreciate the reviewer's acknowledgment of our novelty and detailed comments that help us make further improvements. We summarize the reviewer's comments and respond to them below.\n\n## Title revision:\nWe thank the reviewer for pointing out the problem with our current title and for the detailed suggestions. This point is also mentioned by other reviewers. As the reviewer mentioned, the \"one layer\" causes some confusion and may mislead the understanding of our work. We revised our title as **Iterative Mask Learning on Limited Random Weights** to convey the key factors of our work. 
Please refer to our responses to the reviewer vyrt and the reviewer PU7f for more discussion details about our title revision.\n\n## More comparisons:\nWe thank the reviewer for the constructive comments on our experiments.\nRegarding stronger baselines, since our work aims to propose a new protocol for model compression by representing a sparse model with a small-scale random vector and different masks, it differs from the typical model compression framework, which requires restoring the float values and positions of sparse networks.\nIn addition, previous model compression works follow different settings.\nFor example, classic pruning methods start from a pretrained model, while recent pruning-at-initialization methods start from a randomly initialized network.\nAnother instance is that some compression methods globally pick unimportant weights to remove based on given criteria, which leads to different layer-wise sparsity patterns, whereas others pre-define the sparsity ratio for each layer (e.g., a consistent layer-wise ratio).\nTherefore, it is relatively hard to make a fair and systematic comparison with more recent advanced compression approaches based on our current exploration.\nIn our current version, we construct two baselines with random pruning and magnitude-based pruning, which are simple but commonly effective approaches for comparison.\nThis demonstrates the effectiveness of our proposed protocol.\nWe expect our work can inspire more detailed explorations in this direction, and we leave more fine-grained explorations of the different settings mentioned above to our future work.\nFor other dataset comparisons, we have included the Tiny-ImageNet dataset in our supplementary material for large-scale dataset validation. Similarly, we leave the exploration of more challenging datasets with different tasks to our future work.", " # Response to the Reviewer PU7f:\nWe appreciate the reviewer's recognition of our work and valuable suggestions that help us make further improvements. We summarize the reviewer’s comments and respond to them below.\n\n## Title revision:\nWe agree with the reviewer that our current title may cause some confusion and needs revision. This point is also mentioned by other reviewers. Accordingly, we revised our title as “Iterative Mask Learning on Limited Random Weights”, which covers the key factors of our work. Please refer to our response to the Reviewer vyrt for more details about the title revision.\n\n## Related works:\nWe agree with the reviewer that the related works mentioned in the supplementary material should be discussed in the main file. Although these works are based on different settings for different tasks, they also embody the idea of weight reuse through several iterative strategies. We will add more discussion and integrate them into our related work section.\n\n## Shuffling random weights:\nThis is a good point to explore. We further added some experiments to study the effect of shuffling the random weights. These experiments are based on the ConvMixer backbone with 6/8 blocks and 256/512 dimensions. We use two RP strategies, RP_1e-1 and RP_1e-2. 
The results are shown below.\nWe refer to the padding results from our draft.\n\n|RP_1e-1| 6/256|8/256|6/512|8/512|\n| :----: | :-----: |:-----: |:-----: |:-----: |\n| Padding| 89.09 |90.64 | 92.24 | 92.99 |\n| Shuffling| 89.27 | 90.69 | 91.61 | 92.94 | \n\n|RP_1e-2| 6/256|8/256|6/512|8/512|\n| :----: | :-----: |:-----: |:-----: |:-----: |\n| Padding| 88.96 | 90.72 | 92.16 | 93.09 |\n| Shuffling| 89.23 | 90.59 |92.04 | 93.00 |\n\n\nBased on the results shown above, we found there is no consistent and significant changes about performance compared between padding and shuffling strategies. However, the random shuffling will cause additional storage cost as the prototype order has been changed, which is not promising for model compression. We appreciate the reviewer's technically comments for this interesting point. We leave the further exploration of how to leverage more on shuffling the prototype to gain improvement in our future works.\n\n\n## Minors and typos:\nWe appreciate the reviewer's careful reading and pointing our typos. We will further polish our draft to deliver a better version.", " - **Figure 2:**\nOur current caption misses the explanation of certain elements in Fig. 2 such as “prototype” and the color of the vectors. We will supplement them in our final version to eliminate confusions.\n\n- **Prototype usage:**\nWe clarify the usage of our three strategies of using prototype to update network with some demo examples.\n - **One-layer:**\n We provided an example in the current draft (L151 - L154).\n A 5-layer MLP network with dimension (512, 100, 100, 100, 10) contains 4 randomly initialized weights matrices: {$w_1 \\in \\mathbb{R}^{512 \\times 100}$, $w_2 \\in \\mathbb{R}^{100 \\times 100}$, $w_3 \\in \\mathbb{R}^{100 \\times 100}$, $w_4 \\in \\mathbb{R}^{100 \\times 10}$}. \n In this network, $w_1$, $w_2$, and $w_4$ are prototype as they are matrices with different sizes.\n $w_3$ is the target of $w_2$ as it has the same size as $w_2$.\n Thus, the network updated by *one-layer* strategy is {$w_1$, $w_2$, $w_2$, $w_4$}.\n\n - **Max-layer padding (MP):**\n We use the same example as above. The ***max-layer*** with the largest size is $w_1$. Thus, we use $w_1$ as prototype to update other layers. We flatten the $w_1$ to obtain a vector with length $512 \\times 100$. We use the first $100 \\times 100$ elements to replace the original weights in $w_2$ and $w_3$. We use the first $100 \\times 10$ elements to replace the weights in $w_4$. Formally, the network updated by ***max-layer*** strategy is {$w_1$, $w_1[:s_2]$, $w_1[:s_3]$, $w_1[:s_4]$}, where $s_2, s_3, s_4$ are the layer size of $w_2$, $w_3$, and $w_4$ (please note the flattening and reshaping operations are omitted for simplicity).\n\n - **Random vector padding (RP):**\n We use the same example as above. We set the length of random vector as $r$. Still in $w_1$, we truncate the first $r$ elements and obtains $w_1[:r]$ as the prototype (with corresponding flattening) to fill up the whole network. Let us say $r$ is 100 here. We repeat the prototype 512 and 100 times to fill up $w_1$ and $w_2$, respectively.\n\n- **Eq.4 and Eq.7:**\nWe would like to clarify that the rewriting from Eq.4 to Eq.7 is only for the convenient descriptions following Eq.7. We use superscript $d_l$ (in Eq.7) to denote the dimention instead of subscript $l$ (in Eq.4) as the index of layer. The following introductions of max-layer padding (MP) and random-vector padding (RP) can be more convenient in this way. 
We will clarify this point in our final version to eliminate confusions.\n\n- **Motivations**\nFor the motivations of different choices in the methodology section, please refer to our response of **Writing logic** above for more details. We will integrate this discussion into our final version.\n\n## Results:\n- **Comparisons with Popup and Supermasks:**\nActually, in our experiments, we follow the Popup method as **sparse selection strategy** to achieve our exploration and have compared with the Popup method. In the part of exploring the random weight representative capacity (Sec.4.2), the Popup results are denoted as **Mask** strategy with circle symbols. For the Supermasks approach, since the Popup aims to achieve better sparse network selection and significantly outperforms the performance of Supermasks, we choose to follow the Popup algorithm which is much more promising and did not compare with Supermasks. We will clarify this point and supplement more discussions about Supermasks in our final version.\n\n- **Other datasets:**\nWe have also provided the results on Tiny-imagenet in the supplementary material to support our observation. For more challenging datasets with different tasks, we leave them in our future works. Please refer to our response to the reviewer jejx **(More comparisons)** for more discussions about this point.\n\n- **Std of results:**\nThis point is also suggested by the Reviewer 9nMH, please refer to our response to the Reviewer 9nMH for more details.\n\n- **More insight discussions:**\nWe appreciate the reviewer's contrustive suggestion to foundamentally improve our work. For the key points of the insight discussions for our work, we have summarized them in our response to the point **Introduction revision** above. Please refer to it for more details and we will integrate these discussions in our final version.\n\n## Limitations:\nThe discussions of both limitations of the idea and the societal impacts have already been included in our supplementary material. Please refer to it for more details.", " ## Introduction revision:\nWe agree with the reviewer that a more detailed introduction delivers a better understanding of our work. We briefly emphasize the key points of the introduction revision below and we will integrate them with the current content of the draft into our final version.\nSummarization of introduction discussion: \nOur work mainly contains two parts. The first part explores a scientific question about the representative potential of random weights with limited unique values. There is a series of previous works such as LTH, Supermasks, and Popup. Based on a given and fixed random network, they try to study the subnetwork trainability, subnetwork representative capacity, and finding subnetwork with better representative capacity, respectively. Inspired but different from them, we further explore the maximum representative potential of random weights with limited unique values. Answering this question helps us understand the operation mechanism of a neural network: do neural networks need high-level diversified numerical values to represent semantic patterns? or this capacity can also be achieved by a diversified topological combination (realized by subnetwork mask) based on weights with limited unique values. Along with answering this question, the second part proposes a novel model compression paradigm. 
Different from the typical model compression protocol which requires to restore the sparse positions with their float values, our paradigm represents a sparse network by a small amount of random weights with different binary masks. Experiments demonstrate the promising compression results of our method, which benefits efficient model storage and transferring. Overall, our work inspires us to further understand the network operation mechanism and proposes a novel model compression paradigm.\n\n## Methodology:\n- **Writing logic:**\nWe briefly clarify our writing logic of the methodology section and we will reorganize it in our final version. Our proposed technical approaches have three versions: one-layer, max-layer padding (MP), and random weight padding (RP). The one-layer strategy is naturally inspired by the fact that recent proposed deep frameworks always contain several repetitive modules sharing the same architecture such as transformer models. It shares the weights among all modules with the same structure. However, some parts of the network use unique structure (e.g., final projection layer in the transformer) and different parts of the network may use different structures (e.g., the multi-head attention block and feed-forward block in the transformer). Therefore, we naturally use the layer with the most number of parameters as a prototype (MP strategy) to fill up the whole network. In this way, the number of unique values in the model is limited by the size of the largest layer. Furthermore, we only initialize a random vector as a prototype to fill up the whole network (RP strategy), which further reduces the unique values in the model. In summary, our writing follows the logic of our exploration, which reduces the unique values in a model to answer the scientific question of random weight representative capacity and explore the novel compression paradigm simultaneously.\n\n", " # Response to the Reviewer vyrt:\n\nWe appreciate the reviewer’s detailed suggestions to help us improve our work. We summarize the reviewer’s comments and make responses as below. Please note that we make careful revisions for several sections based on the suggestions of the reviewer. However, the draft revision has the 9-page limitation which makes it hard to show the revision changes. Therefore, we make response here and leave the draft unchanged.\n\n\n## Inappropriate title:\n\nOur current title “One Layer is All You Need” was expected to deliver that one layer (or a vector in an even smaller granularity) with fixed random parameter values can represent diverse feature mappings.\n\nWe admit this title cannot provide clear and enough information about our research work and needs to be revised for clarity. This point is also suggested by other reviewers. We thank the reviewer’s suggestion, “On learning masking operators for network random weights”, for our title revision. Based on this suggestion, we revise our title as *“Iterative Mask Learning on Limited Random Weights''*. In this way, the title contains necessary factors of our work: 1) We are given a set of fixed random weights, which is small-scale compared with the whole network structure; 2) We iteratively learn different masks on the network augmented by the given random weights for different feature mappings. This version describes our scientific exploration process and the “limited random weights” also indicates the model compression function of our work.\n\n\n## Abstract revision:\n\nWe appreciate the review’s suggestions. 
It is constructive for a more comprehensive abstract to describe the whole picture of our work. Accordingly, we revised our abstract below where the main revisions are emphasized by boldtype (please note that we also did some trivial polishing but without boldtype).\n\n*Revised abstract:\nA deeper network architecture generally handles more complicated non-linearity and performs more competitively. Recently, advanced network designs often contain a large number of repetitive structures (e.g., Transformer). They empower the network capacity to a new level but also increase the model size inevitably, which is unfriendly to either model restoring or transferring. In this study, **we are inspired by previous works (e.g., Lottery Ticket [7] and Popup[20]) to study the random network capacity. Following this point,** we are the first to investigate the representative potential of fixed random weights with limited unique values by iteratively learning different masks, leading to a new paradigm for model compression to diminish the model size. Concretely, we utilize one random initialized layer, accompanied with different masks, to convey different feature mappings and represent repetitive modules in a deep network. As a result, the model can be expressed as one-layer with a bunch of masks, which significantly reduces the model storage cost. Naturally, we enhance our strategy by learning masks for a model filled by padding a given random weights sequence. In this way, our method can further lower the space complexity, especially for models without many repetitive architectures. **We scientifically explore the potential of random weights by a series of experimental validations and test our proposed compression paradigm based on different network architectures. Compared with typical compression baselines satisfying more accuracy for compression, our method generally achieves better compression-accuracy trade-off based on different settings such as around 10%/6% improvement on CIFAR10 with 96% compression ratio using ConvMixer and ResNet backbones.***\n\n\n\n ", " The paper studies the use of random weights together with learnable masks. The learnable masks are learned with straight through operator. The authors argue that such training approach for neural network would reduce the model storage requirements and has applications to network compression. The model is validated on cifar 10 and cifar 100 datasets showing that the proposed layers underperforms (approximately) between 1 and 10 accuracy points wrt dense layers depending on the model architecture and number of parameters.\n\n-------\nPost rebuttal: Based on authors responses I updated the overall score from 3 to 6 and increased soundness and presentation both by 1. Overall, the discussed ideas are interesting – using a random layer with learnable masks to achieve competitive performance. However, the validation and the presentation of the paper require improvements. For details see comments below.\n\n**Title**\n- The title might be a bit to generic (not very informative) and a bit misleading (for more complex datasets one would probably need more layer architectures). How about the following title: On learning masking operators for network random weights. \n\n**Abstract**\n- To strengthen the abstract, please add quantification of improvements in terms of space complexity. 
Also, based on the experimental section the improvements come at the expense of model accuracy, however, this trade-off is not captured in the abstract.\n\n**Introduction**\n- Introduction section is in general well written and easy to follow. However, it could benefit from some re-writing, shortening, and refocusing. \n\n- The introduction does not discuss the obtained results making it hard to assess the significance of the proposed approach. It is also unclear what the ML community gains with the results of this study. Please add such discussion to the introduction section.\n\n**Methodology**\n\n- This section would benefit the most from re-writing and restructuring. In the current form it is difficult to follow and do not allow to fully appreciate the presented ideas.\n\n- Figure 2 should be better discussed and better formatted. Please extend the caption to clarify the figure. In general, all figures in the paper should be self-explanatory making it possible to understand the figures just by reading the captions. In its current form it is impossible to understand the drawings.\n\n- Based on the description the process of updating the prototype weights into target weights is unclear. Could the authors clarify how different networks are updated?\n\n- Eq 4 and Eq 7 differ only in the dimensionality of w. Why changing the dimensionality leads to new paradigm of random weights padding? Moreover, please boldface vectors in Eq 7 to differentiate them further from scalars in Eq. 4.\n\n- The section lacks motivations behind different choices. \n\n**Results**\n\n- The introduced layer is not compared to previously published models. Would it be possible to compare the model to Supermaks and Popup? Adding comparisons to previous art would make the validation stronger.\n\n- Cifar datasets are small scale. Would the observations generalize to larger scale datasets? Adding another dataset would make the observations more compelling. \n\n- The results are missing stds, making it hard to assess the significance of the results.\n\n- In general, the proposed layers are underperforming w.r.t dense layers. That would be expected. However, the results section lacks discussion and positioning of the reported results, e.g., Why the reported results are interesting? What do we learn as a community from the results? What is the impact of the reported results? Adding more in-depth discussion would make the paper stronger.\n See above (strengths and weaknesses). - The paper does not discuss the limitations of the discussed ideas.\n\n- The paper does not mention societal impacts.\n", " This paper aims to handle the difficulty of restoring/transmitting models caused by the increasing model size for recent large-scale neural networks. Inspired by recent works (e.g., LTH, Popup) on random networks, the paper starts by answering a scientific question: what is the potential of a random network? Specifically, the authors propose a series of strategies to study the random network with different masks to map different features. Through the exploration for the answer, a new model compression paradigm is proposed by only restoring one-layer random weights and a bunch of masks to represent a model. Experiments were conducted based on using different CNN/transformer architectures. Extensive results validate the rationality of the motivation and show the feasibility of the new compression paradigm. Strengths:\n1) This work tries to reduce the model storage size, which is a clear and practical motivation. 
Compared with typical model size compression methods that remove partial parameters, it is a novel way to represent a model by using different masks on fixed random weights.\n2) This work is driven by studying the random weight capacity, which is an interesting yet under-explored studying point. It is novel to use “one-layer” weights with different masks to learn a model.\n3) Experiment is extensive using different model architectures. Firstly, it answers the question about random weight potential using a series of proposed strategies to construct a network using random weights. Secondly, it shows the feasibility of a new compression paradigm compared with the typical model compression method.\n\nWeakness:\n1) It is encouraged to revise the draft title to a more appropriate one. After reading the draft, I think the current title doesn’t convey the key factor of this paper. Iteratively selecting different masks on a set of fixed random weights for different feature mappings should be the main point, therefore, the usage of “one-layer” in the title is inaccurate. On the other side, “all you need” is a too vague description. It needs to be concretized to eliminate confusion.\n2) Some related works are supplemented in the appendix, I suggest moving them into the main draft and providing necessary discussions about them. The discussion should include the difference between the submitted work with these existing works since they look highly related to this work, even if they are in a different setting.\n3) Technically, the proposed random vector padding (RP) repeats the given set of random weights in the same order. If randomly shuffling the random set and then doing the padding to construct the model, can it improve the capacity?\n4) Minors: (1) In Alg.3, it seems the output is written in the wrong way, which should be the output of MP strategy in Alg.2, but not consistent with RP strategy in Alg.3. (2) Around Eq. 5 and Eq. 6, the explanation of T^p is missing. It should be further clarified and consistent with Alg.1. (3) In Eq.9, the d_l should be the dimension of w_l instead of the number of vector v_pro. Please make it clear to eliminate confusion. See the above weaknesses. The authors have addressed the limitations and potential negative societal impact.", " This paper proposed a new paradigm for neural network compression. The authors randomly initialize a set of weights. The actual parameters of each layer are represented as the initialized weights with binary masks. The weights are shared by multiple layers, while the masks are different for each layer. The weights are fixed, while the masks are learnable. In this way, the total bytes are significantly reduced. Experiments show that the proposed method achieves better compression than baselines. Strengths:\n\n1. This paper is well organized, and the core method is clearly represented.\n2. This paper represents each layer as shared weights with different masks. The idea for model compression is interesting and novel.\n3. Experiments show that the proposed method achieves good compression for image classification models.\n\nWeaknesses:\n\n1. The title of this paper is unsuitable and the authors should change it. First, people will not associate the title with model compression. Second, the word \"one layer\" in this paper is misleading. Although some parts are shared cross all layers, there are differences between layers. Thus, we can't say them \"one layer\". In my opinion, masks are also parameters of the model.\n2. 
It is better to compare the compression performance with stronger baselines or bigger datasets such as ImageNet.\n3. In general, the proposed method achieves compression by sharing some parts of parameters (while adjusting the others). Several previous works have explored this direction, such as [1] and [2]. The authors should discuss them in related works.\n\n[1] Residual connections encourage iterative inference. International Conference on Learning Representations, 2017\n\n[2] Recurrent convolutions: A model compression point of view. NIPS Workshops: Compact Deep Neural Network Representation with Industrial Applications, 2018\n\n None (refer to weaknesses) Authors discussed the limitations and potential negative societal impact of their work in supplementary material.", " This paper proposes a new way of representing a neural network in a compressed way coined \"One Layer is All You need\". The idea is to keep a single fixed and randomly initialized weight vector as prototype for each layer of the network, whereas each layer is saved as a learned mask determining which weights of the prototype are used. Since saving bit masks is more memory efficient than floating point, the network can be efficiently stored. Experiments with ResNet32, ResNet56, ConvMixer and ViT on CIFAR10 and CIFAR100 show that this method achieves improved results in terms of accuracy compared to sparse network training baselines while maintaining larger compression ratios. Strengths:\n- {S1} The problem of storing neural networks in an efficient manner is significant and the proposed idea improves in this direction.\n- {S2} The trade-off between network compression and accuracy is improved in comparison to sparse network training baselines.\n- {S3} The writing is well-structured and easy to follow.\n\nWeaknesses:\n- {W1} Experiments only performed on low-resolution datasets (CIFAR10, CIFAR100, TinyImagenet).\n- {W2} It is not clear if experimental settings are repeated with different seeds. The checklist refers to the supplementary material, but I cannot find any results for multiple seeds there either. I believe all experiments should be conducted for multiple seeds.\n- {W3} No code is included for reproducibility.\n- {W4} The writing should be improved in terms of typos and grammar (see below for some instances).\n\nTypos:\n- The sentence in lines 20-21 seems incomplete.\n- Lines 126/150: rewrited -> rewritten\n- Line 259: compreesion -> compression\n- Line 263: The sentence is confusing, because you train two networks with different strategies and not one network with both.\n- Line 307: foundamental -> fundamental\n\n--------------------------------------------------------------------------\n\n{W2} is addressed by the authors during the discussion. Furthermore, they ensured they will resolve {W3} and {W4}. I updated my score respectively. - {Q1} The paper mentions a predefined sparse selection ratio (line 129) which is not further mentioned in the experiments. What value did you set K to for your experiments?\n- {Q2} Given the paper, it is not clear to me how the compression ratio is computed for MP/RP in Figures 4/5. I understand that Eq. 10 is used for previous work, however, the proposed method seems to decouple p in C_w (which depends on the fixed vector size) and C_m (which depends on K as mentioned in Q1). Could you please show how you calculated the compression sizes with an example?\n\nI'm willing to raise my score if my questions and concerns ({W2}) are addressed. 
The authors discussed limitations in the supplementary material. The fact that this method cannot be used to compress already pretrained models but requires training from scratch is an important limitation and should be mentioned in the main paper." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "O5N52SwmNvE", "e07jkflbeqH", "1y27vPXQnef", "1v2U7GEqJU4", "cFl87AKce0", "KsrCUGqWTh", "15B1oDmbwym", "8Xv4Qt1Mx3S", "KsrCUGqWTh", "A7Csk-v-i6f", "KsrCUGqWTh", "KsrCUGqWTh", "KsrCUGqWTh", "cFl87AKce0", "cFl87AKce0", "KKKSA4fougQ", "A7Csk-v-i6f", "A7Csk-v-i6f", "A7Csk-v-i6f", "nips_2022_7rcuQ_V2GFg", "nips_2022_7rcuQ_V2GFg", "nips_2022_7rcuQ_V2GFg", "nips_2022_7rcuQ_V2GFg" ]
nips_2022_owZdBnUiw2
Look More but Care Less in Video Recognition
Existing action recognition methods typically sample a few frames to represent each video to avoid the enormous computation, which often limits the recognition performance. To tackle this problem, we propose Ample and Focal Network (AFNet), which is composed of two branches to utilize more frames but with less computation. Specifically, the Ample Branch takes all input frames to obtain abundant information with condensed computation and provides the guidance for Focal Branch by the proposed Navigation Module; the Focal Branch squeezes the temporal size to only focus on the salient frames at each convolution block; in the end, the results of two branches are adaptively fused to prevent the loss of information. With this design, we can introduce more frames to the network but cost less computation. Besides, we demonstrate AFNet can utilize less frames while achieving higher accuracy as the dynamic selection in intermediate features enforces implicit temporal modeling. Further, we show that our method can be extended to reduce spatial redundancy with even less cost. Extensive experiments on five datasets demonstrate the effectiveness and efficiency of our method.
Accept
The proposed architecture, AFNet, is simple yet effective for end-to-end efficient video action recognition. The paper presents a good idea, is well written, and offers relatively solid experimental results to support its claims. The emergency reviewer gives the highest score (6), while the other reviewers do have some concerns with this paper. Most of the other reviewers give a rating of borderline accept after the authors made further efforts in the rebuttal period. In light of this, the initial recommendation would be acceptance.
val
[ "K3TcQ_uPLrl", "Ek0_5N_fjID", "DeMn7YGAl85", "dUbJwatsor", "G0X-wDDBSY0", "-I5Akhkt9w", "_7VvCnAS4n2", "sWSQaG9pF0P", "gQNub5B7NZ7", "b_yACFh9Kl8d", "flU5hgQuEbu", "qw9RST14dyqv", "E12ZZSHtBe", "IehRtmcVp2q", "XXQ08nsiMPn_", "nlew0T_oSPt", "cT_epe5jl4y", "nQVeX3ms3mvg", "lnB_0omvXct", "VhuuC9g21dm", "nDbfclq5bdS", "4pBLXAb4UFd", "S9rFSipJjSdB", "Ulel-9INRLJ", "4nK3k_53F69", "LvUu_jtL87G", "x5Q0wcuusc", "cK33T7udapT", "w4uRoZaLRHb", "BaOjV3hSrh0", "yiTSSBEOtH-" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer bMpQ:\n\nWe have submitted the final version of our draft just now. Compared with the previous version, we have rewritten the explanations for implicit temporal modeling in Section 3.2 and included the reasons for comparisons with TSN in Table 1.\n\nIn the previous version, we have already added the discussion with SlowFast to emphasize the difference and compared AFNet with SlowFast on Something-Something V2 in Table 2 where our method exhibits advantages both in accuracy and efficiency.\n\nThank you so much for providing valuable comments to us and helping us improve our paper!\n\nSincerely, \nAuthors", " Dear Reviewer aYAw,\n\nWe thank you so much for updating the feedback to us.\n\nBased on your response, we want to kindly clarify several points in case there is a misunderstanding of our work and we hope these can resolve your concerns.\n\n***\n(1) The main reason for the two-branch design in AFNet is to prevent the information loss which we have observed in previous dynamic methods:\n| Method | Frame | mAP | \n| :-----: | ------------ | :-----: |\n| TSN | 16 | 76.9% |\n| SCSampler | 16 | 72.9% |\n| AR-Net | 16 | 73.8% |\n| VideoIQ | 16 | 74.8% |\n| AdaFocus | 16 | 75.0% |\n| AFNet | 16 | 76.6% |\n\nWhen sampling 16 frames on ActivityNet, SCSampler[1], AR-Net[2], VideoIQ[3], and AdaFocus[4] have worse performance compared to TSN[5] as they completely abandon the information that the policy network recognizes as unimportant. However, AFNet exhibits similar performance (slightly inferior) compared with TSN as the two-branch design can effectively avoid the information loss caused by frame selection.\n\nBased on these analyses, we design the two-branch design for AFNet to prevent the loss of information. Therefore, while this structure seems similar to SlowFast[6], we believe the motivation and specific designs are different significantly.\n\n(2) As for SCSampler, it adopts a policy network to help select salient frames which explicitly reduces redundancy in data. Then the newly formed data will be sent to the deep network for classification. This procedure has been adopted by many dynamic methods like AR-Net, VideoIQ, and AdaFocus which we have shown in Figure 1. However, based on previous analysis, this will lead to information loss as the unimportant frames will be completely abandoned and cannot be utilized by the deep network for classification. Moreover, the policy network will introduce extra computations and complicate the training strategies as these methods have to split the training into several stages.\n\nIn contrast, we design an extremely lightweight navigation module (19M FLOPs in a 12-frame network) which can be incorporated into the backbone network and make frame selection at intermediate features. Combining this design with our two-branch structure, we can not only prevent the information loss on focal branch, but enforce implicit temporal modeling (demonstrated in Section 3.2 and Table 1) as well. Moreover, the intermediate frame selection enables our method to be trained in an end-to-end fashion which makes it easy for implementation. \n\n[1] Korbar B, Tran D, Torresani L. Scsampler: Sampling salient clips from video for efficient action recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 6232-6242. \n[2] Meng Y, Lin C C, Panda R, et al. Ar-net: Adaptive frame resolution for efficient action recognition[C]//European Conference on Computer Vision. Springer, Cham, 2020: 86-104. 
\n[3] Sun X, Panda R, Chen C F R, et al. Dynamic network quantization for efficient video inference[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 7375-7385. \n[4] Wang Y, Chen Z, Jiang H, et al. Adaptive focus for efficient video recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 16249-16258. \n[5] Wang L, Xiong Y, Wang Z, et al. Temporal segment networks: Towards good practices for deep action recognition[C]//European conference on computer vision. Springer, Cham, 2016: 20-36. \n[6] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 6202-6211. \n***\n\nWe appreciate it a lot for your further feedback and we will continuously revise the final version to highlight our contributions. Thanks again for your efforts.\n\nSincerely, \nAuthors", " I appreciate that the authors provide a detailed response to my concerns. They conducted additional comparisons with MobileNet v3 and more frames, which verify the effectiveness and efficiency of the proposed method. As for the novelty compared with SlowFast and Scsampler, I still have my concerns despite the differences in static/dynamic schemes and the frame/feature selections between them. I keep my rating as Borderline accept.", " Dear Reviewer bMpQ:\n\nThank you so much for delivering your valuable comments to us. We have tried our best to provide a detailed response to you and revised the draft accordingly based on your concerns.\n\nWe would like to kindly remind you that tomorrow (Aug 9th) is the deadline for the discussion phase and your opinions regarding our response matter a lot to us.\n\nThank you so much for helping us improve our paper.\n\nSincerely, \nAuthors", " Dear Reviewer ztLC:\n\nThank you so much for delivering your valuable comments to us. We have tried our best to provide a detailed response to you and revised the draft accordingly based on your concerns.\n\nWe would like to kindly remind you that tomorrow (Aug 9th) is the deadline for the discussion phase and your opinions regarding our response matter a lot to us.\n\nThank you so much for helping us improve our paper.\n\nSincerely, \nAuthors", " Dear Reviewer aYAw:\n\nThank you so much for delivering your valuable comments to us. We have tried our best to provide a detailed response to you and revised the draft accordingly based on your concerns.\n\nWe would like to kindly remind you that tomorrow (Aug 9th) is the deadline for the discussion phase and your opinions regarding our response matter a lot to us.\n\nThank you so much for helping us improve our paper.\n\nSincerely, \nAuthors", " Dear Reviewer VKUT:\n\nThank you so much for delivering your valuable comments to us. We have tried our best to provide a detailed response to you and revised the draft accordingly based on your concerns.\n\nWe would like to kindly remind you that tomorrow (Aug 9th) is the deadline for the discussion phase and your opinions regarding our response matter a lot to us.\n\nThank you so much for helping us improve our paper.\n\nSincerely, \nAuthors", " Dear Reviewers:\n\nThanks for your valuable comments made in the review process. We have revised the draft based on your suggestions and the revised area is marked in blue color. 
Specifically, we have:\n***\n* added the discussion with SlowFast[1] in Section 2 and the comparison in Table 2,\n* built AFNet on ResNet-101[2] and MobileNet V3[3] to prove its generalization ability in Section 3 of supplementary material,\n* built AFNet with more frames on ActivityNet to demonstrate its efficiency in Section 4 of supplementary material,\n* tested the practical speedup of AFNet on CPU and GPU and compared it with competing methods in Section 7 of supplementary material,\n* added ablation of fusion strategy and temperature decay schedule in Section 8 of supplementary material,\n* demonstrated our analysis in Section 3.2 by learning soft temporal weights for each video in Section 5 of supplementary material,\n* added the description of computational paradigms for training and inference in Section 6 of supplementary material,\n* revised the typos and unclear description.\n\n[1] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 6202-6211. \n[2] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778. \n[3] Howard A, Sandler M, Chu G, et al. Searching for mobilenetv3[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 1314-1324. \n***\n\nThank you so much for being with us so far.\n\nSincerely, \nAuthors", " Dear Reviewer VKUT,\n\nThank you so much for helping improve our paper so far! We have an update of our results on MobileNet V3[1].\n\n***\n**Update of Results on MobileNet V3** \nTo better demonstrate the generalization of AFNet, we build AFNet on MobileNet V3 (MN3) to test its performance on efficient backbones. We conclude both results of ResNet-101[2] and MobileNet V3 on Something-Something V1 in this table:\n| Method | Frame | Top-1 Acc. | GFLOPs |\n| :-----: | ------------ | :-----: | :-----: |\n| TSM_R101 | 8 | 47.2% | 62.8G |\n| AFNet-TSM_R101(RT=0.4) | 8 | 47.2% | 28.0G |\n| TSM_R101 | 12 | 49.1% | 94.2G |\n| AFNet-TSM_R101(RT=0.4) | 12 | 49.8% | 42.1G | \n| |\n| TSM_MN3 | 8 | 42.2% | 1.7G |\n| AFNet-TSM_MN3(RT=0.4) | 8 | 43.6% | 1.5G |\n| TSM_MN3 | 12 | 43.9% | 2.6G |\n| AFNet-TSM_MN3(RT=0.4) | 12 | 45.3% | 2.2G | \n\nFrom this table, we can observe that AFNet continuously improves the accuracy of baseline methods while costing less computation. Specifically, when implementing on stronger backbone ResNet-101, AFNet significantly improves the efficiency by only costing 44.6% and 44.7% of the computations. When we build AFNet on efficient structure MobileNetv3, the saved computations are less obvious as the architecture of the base model is already extremely lightweight. Nevertheless, the improvement in accuracy is more obvious compared to results on ResNet-101 which can be attributed to the effectiveness of the two-branch design and navigation module.\n\n[1] Howard A, Sandler M, Chu G, et al. Searching for mobilenetv3[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 1314-1324. \n[2] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 
2016: 770-778.\n***\n\n\nGiven the NeurIPS final discussion deadline (08/09) is approaching, we really hope to have a further discussion with you to see if our responses solve your concerns.\n\nThank you so much for being with us so far!\n\nSincerely, \nAuthors", " We appreciate a lot for the Reviewer's further feedback. Here are further explanations for the information loss.\n\n***\n***If it is necessary to utilize dense frames*** \nA straightforward ablation is to remove the two-branch design and only use a single branch to conduct intermediate frame selection. The result is shown in Table 5: the single branch obtains much worse results than the two-branch design, especially when the selection ratio is small. This demonstrates that the two-branch structure is necessary as completely abandoning the information at ample branch will lead to much worse results.\n\n***Visualization of what kind of information is contained in unimportant frames*** \nThanks for this question. However, it is hard to analyze the information in unimportant frames as different frames will be selected at different layers. Therefore, we analyze the learned policy by calculating the distributions of RT at different stages. In Figure 5, we can observe a decreased trend in RT for all curves which means that AFNet tends to select more frames at earlier layers and skip more at later stages. It can be explained that earlier layers mostly capture low-level information which has relatively large divergence among different frames. While high-level semantics between different frames are more similar, therefore AFNet tends to skip more at later convolution blocks.\n\n***Using randomly sampled unimportant frames instead of all frames*** \nIf we understand the Reviewer's comment correctly, we are suggested to use the unimportant frames on ample branch instead of all frames. In our original implementation, we will learn binary temporal masks $L_n$ to select important frames. Therefore, we can generate complementary masks $L_n'$ to choose the unimportant frames on ample branch. The results are shown in table:\n| Method | Frame | mAP | GFLOPs |\n| :-----: | ------------ | :-----: | :-----: |\n| AFNet(RT=0.5)_all_frames | 12 | 74.3% | 29.7G |\n| AFNet(RT=0.5)_unimportant_frames | 12 | 72.3% | 28.4G |\n\nWe can see that using unimportant frames on ample branch clearly lead to worse performance in accuracy. This demonstrates the necessity to keep all frames on the ample branch. Moreover, utilizing all the frames on ample branch does not cause too much difference in computational costs (only 1.3G for 12 frames). The reason is that the ample branch is designed to be very lightweight as it will downsample the features and reduce the channel size. Therefore, the two-branch design is effective as it prevents the information loss while only costs small computations.\n\nFurthermore, we have demonstrated in Table 5 that our navigation module is also effective as it exhibits better performance compared to other selection strategies.\n***\n\nBased on these points above, we think it is necessary to utilize all frames on the ample branch to prevent the information loss.\n\nThank you so much for delivering further feedback to us. If you have further comments or concerns, feel free to ask and we are willing to have further discussion with you. Thanks again.", " Thank the authors for the detailed response. I still have a comment about the description of information loss. 
As the authors mention that the information loss is resulted from the fact that 'unimportant frames or regions are completely abandoned', which is reasonable. However, I am not sure if it is necessary to utilize dense frames as adopted in this paper to enhance the model since these unimportant frames are generally redundant and can hardly provide useful information for the target task. Therefore I asked in my former review that it would be better to have visualization or a good measure of what kind of information is contained in these unimportant frames. Another good experiment is to verify to what extent using randomly sampled unimportant frames instead of all frames can worsen the performance of the current method.", " Dear Reviewer aYAw,\n\nThank you so much for helping improve our paper so far! We have an update of our results on MobileNet V3[1].\n\n***\n**Update of Results on MobileNet V3** \nAs we promised in Q5, we will build AFNet on MobileNet V3 (MN3) to test the generalization of our method on efficient backbones. Moreover, we have implemented AFNet on stronger backbone ResNet-101 (R101)[2] and we conclude both results on Something-Something V1 in this table:\n| Method | Frame | Top-1 Acc. | GFLOPs |\n| :-----: | ------------ | :-----: | :-----: |\n| TSM_R101 | 8 | 47.2% | 62.8G |\n| AFNet-TSM_R101(RT=0.4) | 8 | 47.2% | 28.0G |\n| TSM_R101 | 12 | 49.1% | 94.2G |\n| AFNet-TSM_R101(RT=0.4) | 12 | 49.8% | 42.1G | \n| |\n| TSM_MN3 | 8 | 42.2% | 1.7G |\n| AFNet-TSM_MN3(RT=0.4) | 8 | 43.6% | 1.5G |\n| TSM_MN3 | 12 | 43.9% | 2.6G |\n| AFNet-TSM_MN3(RT=0.4) | 12 | 45.3% | 2.2G | \n\nFrom this table, we can observe that AFNet continuously improves the accuracy of baseline methods while costing less computation. Specifically, when implementing on stronger backbone ResNet-101, AFNet significantly improves the efficiency by only costing 44.6% and 44.7% of the computations. When we build AFNet on efficient structure MobileNetv3, the saved computations are less obvious as the architecture of the base model is already extremely lightweight. Nevertheless, the improvement in accuracy is more obvious compared to results on ResNet-101 which can be attributed to the effectiveness of the two-branch design and navigation module.\n\n[1] Howard A, Sandler M, Chu G, et al. Searching for mobilenetv3[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 1314-1324. \n[2] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.\n***\n\n\nGiven the NeurIPS final discussion deadline (08/09) is approaching, we really hope to have a further discussion with you to see if our responses solve your concerns.\n\nThank you so much for being with us so far! Have a wonderful weekend!\n\nSincerely, \nAuthors", " Dear Reviewer bMpQ,\n\nThank you so much for helping improve our paper so far!\n\nGiven the NeurIPS final discussion deadline (08/09) is approaching, we really hope to have a further discussion with you to see if our responses solve your concerns.\n\nThank you so much for being with us so far! Have a wonderful weekend!\n\nSincerely, \nAuthors", " Dear Reviewer ztLC,\n\nThank you so much for helping improve our paper so far!\n\nGiven the NeurIPS final discussion deadline (08/09) is approaching, we really hope to have a further discussion with you to see if our responses solve your concerns.\n\nThank you so much for being with us so far! 
Have a wonderful weekend!\n\nSincerely, \nAuthors", " Dear Reviewer ChDo,\n\nThank you so much for helping improve our paper so far!\n\nGiven the NeurIPS final discussion deadline (08/09) is approaching, we really hope to have a further discussion with you to see if our responses solve your concerns.\n\nThank you so much for being with us so far! Have a wonderful weekend!\n\nSincerely, \nAuthors", " **Significance: Comparison with SlowFast.** \nFirst, we want to stress that the main focus of our paper is on efficiency and the main comparisons in our paper should be with other efficient dynamic networks which is the common practice of this area [4],[5],[6]. Therefore, we conduct experiments on datasets like Mini-Kinetics, ActivityNet instead of Kinetics-400, Kinetics-600 in order to compare with other dynamic methods.\n\nThe result of SlowFast-ResNet50 on Something-Something v2 in Multiview Transformer[9] comes from Table 5 in Multigrid training[10]. The accuracy of 61.7 is not the baseline result but the result of the model trained with Multigrid training. Therefore, we should use the accuracy of 60.9 for a fair comparison. Besides, it is included in [10] that the model uses the speed ratio a=4 and channel ratio of b=1/8 with 64 frames on the fast pathway. Therefore, there will be 16 frames on slow pathway and the GFLOPs should be much greater than 36.1 (channel ratio of b=1/8, 4 frames on slow pathway, and 32 frames on fast pathway). Moreover, we have discovered from the official website of PyTorch video (https://pytorchvideo.readthedocs.io/en/latest/model_zoo.html) that SlowFast-R50 achieve the accuracy of 61.68 with a GFLOPs of 66.60x3. We conclude the results on Something-Something v2 in this table:\n| Method | Pretrain | Top-1 Acc. | GFLOPs |\n| :-----: | ------------ | :-----: | :-----: |\n| SlowFast | Kinetics400 | 60.9% | >36.1G |\n| SlowFast | Kinetics400 | 61.7% | 66.6x3G |\n| AFNet-TSM(RT=0.4) | ImageNet | 61.3% | 27.8G |\n| AFNet-TSM(RT=0.8) | ImageNet | 62.5% | 31.7G | \n\nWe can observe that AFNet-TSM exhibits clearly better performance on this dataset in both accuracy and efficiency. Furthermore, SlowFast is pretrained on Kinetics400, and AFNet is pretrained on ImageNet. We believe our method will obtain higher accuracy if we pretrain on Kinetics400. We thank the Reviewer for pointing it out and we will include this part in our final version. \n\n**Question: Why compare with TSN?** \nThanks for this question. As we have explained in reply to implicit temporal modeling, the reason we compare with TSN is that AFNet does not design any temporal modeling module which is the same as TSN and both methods use ResNet-50[11] as the backbone for classification. Table 1 is used to prove that the navigation module enforces implicit temporal modeling in AFNet as it exhibits much higher accuracy compared with TSN when selecting fewer frames. With this design, we can achieve less is more: seeing fewer frames with higher accuracy.\n\n\nWe hope the explanations can address your concerns and we would appreciate it a lot if you can recognize the contributions of our work.\n\n\n[1] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 6202-6211. \n[2] Lin J, Gan C, Han S. Tsm: Temporal shift module for efficient video understanding[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 7083-7093. \n[3] Li Y, Ji B, Shi X, et al. 
Tea: Temporal excitation and aggregation for action recognition[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 909-918. \n[4] Meng Y, Lin C C, Panda R, et al. Ar-net: Adaptive frame resolution for efficient action recognition[C]//European Conference on Computer Vision. Springer, Cham, 2020: 86-104. \n[5] Wang Y, Chen Z, Jiang H, et al. Adaptive focus for efficient video recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 16249-16258. \n[6] Sun X, Panda R, Chen C F R, et al. Dynamic network quantization for efficient video inference[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 7375-7385. \n[7] Wang L, Xiong Y, Wang Z, et al. Temporal segment networks: Towards good practices for deep action recognition[C]//European conference on computer vision. Springer, Cham, 2016: 20-36. \n[8] Jang E, Gu S, Poole B. Categorical reparameterization with gumbel-softmax[J]. arXiv preprint arXiv:1611.01144, 2016. \n[9] Yan S, Xiong X, Arnab A, et al. Multiview transformers for video recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 3333-3343. \n[10] Wu C Y, Girshick R, He K, et al. A multigrid method for efficiently training video models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 153-162. \n[11] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778. ", " We appreciate the Reviewer's careful comments to point out our unclear description and we make the response as below.\n\n\n**Originality: Discussion with SlowFast[1].** \nAFNet looks similar to SlowFast as both adopt a two-branch design. Nevertheless, we want to stress that the motivation for the two-branch structure of the two works is totally different. For SlowFast, it is designed to capture semantic information and changing motion with branches at different temporal speeds for better performance. To achieve this, it adopts fixed temporal sampling rates on both branches and uses 3D convolutions for temporal modeling. As for AFNet, the focus of this paper is on efficiency, and the two-branch architecture is motivated by the phenomenon that most existing dynamic networks for video recognition will lead to information loss (more detailed analysis can be found in our reply to Weakness 1 pointed out by Reviewer ChDo). Therefore, we design a very lightweight ample branch to prevent the information lost in focal branch. Besides, we do not design any temporal modeling module like other static 2D methods TSM[2], TEA[3] which is the common practice in this area (see papers like AR-Net[4], AdaFocus[5], VideoIQ[6]). More discussion with SlowFast can be found in our reply to Weakness 1 pointed out by Reviewer VKUT. We thank the Reviewer for pointing this out and we will add the discussion in our revision.\n\n**Clarity: Implicit temporal modeling.** \nSorry for the inconvenience brought in reading. We will try to clarify this part here. First, we want to reemphasize that the focus of our work is on efficiency, and we do not explicitly design any temporal modeling module like other 2D methods TSM, TEA. Therefore, a fair baseline for AFNet should be TSN[7] as both methods simply average the predictions of each frame to present the final prediction of the corresponding video. 
\n\nThough, we demonstrate that AFNet enforces implicit temporal modeling by the navigation module. In Equation (14), $L_n$ is a binary temporal mask which will decide whether the coefficient will be calculated for each frame at every convolutional block. \nFor each stage, it is made up of multiple convolutional blocks and each frame at every block will be assigned a binary weight (1 or 0) by the temporal mask $L_n$. Though there is only a binary choice for the weights at each block, the weights will approximate soft temporal weights if we consider the whole stage in a long run. Moreover, in Equation (3), the logits $p_n^t$ for mask generation are produced by $W_2$ which models the relations between frames. \nTherefore, the temporal mask $L_n$ has taken the temporal information into account and the series multiplication of the output of each convolutional block results in learnable temporal weights, which we regard as implicit temporal modeling. \nThis part can be proved by the experiments in Table 1 as AFNet exhibits much higher accuracy compared with TSN when selecting fewer frames. \nHowever, if all the frames are chosen, all temporal masks will be filled with the value of 1 which does not cause a temporal divergence between frames.\n\n**Clarity: Sampling from Gaussian distribution.** \nThanks for noticing the details in our experiments. Assume we will sample $N$ frames for each video. First, we build a gaussian function with the mean value of $(N+1)/2$ and the standard deviation value of $N/6$. Then, we will get the probabilities of sampling each frame by inputting the frame index from $1$ to $N$ to the gaussian function. After that, we have two implementations for the sampling procedure. The first one is to select top-k frames based on the probabilities we get. However, the selected frames are fixed as the probabilities do not change. Therefore, we have introduced Gumbel softmax[8] to approximate the probabilities from the gaussian function and then select the top-k frames based on the generated probabilities. Note that the generated probabilities are not fixed as it contains Gumbel noise. We have tested both methods and the second implementation achieves slightly better performance, so we reported their results in Table 5.", " \n**Question 1: Generation of L_n.** \n$L_n$ is a binary mask drawn from the distribution $\\pi$ shown in Equation (5). Specifically, we generate logits $p_n^t$ ($1 \\times 2\\times T \\times 1 \\times 1$) for each video to decide whether to choose each frame and use argmax to make it a one-hot tensor. Then, we slice the first dimension of the one-hot tensor to get the temporal mask $L_n$ ($1 \\times T \\times 1 \\times 1$). 
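For concreteness, a minimal PyTorch-style sketch of this mask generation is given below. The shapes follow the description above, the function and variable names are ours rather than the authors' code, and the straight-through Gumbel-softmax stands in for the non-differentiable argmax discussed in the next sentence.

```python
import torch
import torch.nn.functional as F

def temporal_mask(logits, tau=1.0):
    # logits: (B, 2*T, 1, 1), a "keep"/"skip" logit pair for each of the T frames
    B = logits.shape[0]
    pair_logits = logits.view(B, 2, -1).permute(0, 2, 1)         # (B, T, 2)
    one_hot = F.gumbel_softmax(pair_logits, tau=tau, hard=True)  # straight-through one-hot per frame
    L_n = one_hot[..., 0].view(B, -1, 1, 1)                      # (B, T, 1, 1) binary temporal mask
    return L_n
```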
As argmax is non-differentiable, we adopt Gumbel softmax[6] to allow discrete decisions in forward propagation and estimate gradients in backward propagation.\n\n**Q2: Revision on Equation (2).** \nThanks for the suggestion and we will revise it in our final version.\n\n\n**Q3: Temperature decay schedule.** \nIn all experiments, we let the temperature decay exponentially and we compare it with two variants including decay with a cosine shape and decay linearly on the dataset of ActivityNet:\n| Method | mAP | \n| :-----: | ------------ |\n| AFNet(RT=0.5)_exp | 74.3% |\n| AFNet(RT=0.5)_linear | 73.8% |\n| AFNet(RT=0.5)_cos | 74.1% |\n\nThe results show that our strategy obtain the highest accuracy compared to the other two variants and we will add this part in our final version.\n\n**Q4: Results in Table 1 are surprising.** \nThe reason for the significant improvement is that AFNet enforces implicit temporal modeling compared with TSN[7] which does not build any temporal modeling module. \nIn Equation (14), $L_n$ is a binary temporal mask which will decide whether the coefficient will be calculated for each frame at every convolutional block. \nFor each stage, it is made up of multiple convolutional blocks and each frame at every block will be assigned a binary weight (1 or 0) by the temporal mask $L_n$. Though there is only a binary choice for the weights at each block, the weights will approximate soft temporal weights if we consider the whole stage in a long run. Moreover, in Equation (3), the logits $p_n^t$ for mask generation are produced by $W_2$ which models the relations between frames. \nTherefore, the temporal mask $L_n$ has taken the temporal information into account and the series multiplication of the output of each convolutional block results in learnable temporal weights, which we regard as implicit temporal modeling. \nTherefore, the results in Table 1 can be regarded as proof of our analysis in Section 3.2. Moreover, we have tested to modify the navigation module and change it to learn a soft temporal weight for each frame which obtain similar results in Table 1. This part can be found in our reply to Question 2 pointed out by Reviewer VKUT.\n\n\nWe hope the explanations can address your concerns and we would appreciate it a lot if you can recognize the contributions of our work.\n\n\n[1] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778. \n[2] Meng Y, Lin C C, Panda R, et al. Ar-net: Adaptive frame resolution for efficient action recognition[C]//European Conference on Computer Vision. Springer, Cham, 2020: 86-104. \n[3] Sun X, Panda R, Chen C F R, et al. Dynamic network quantization for efficient video inference[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 7375-7385. \n[4] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 6202-6211. \n[5] Yeung S, Russakovsky O, Mori G, et al. End-to-end learning of action detection from frame glimpses in videos[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 2678-2687. \n[6] Jang E, Gu S, Poole B. Categorical reparameterization with gumbel-softmax[J]. arXiv preprint arXiv:1611.01144, 2016. \n[7] Wang L, Xiong Y, Wang Z, et al. 
Temporal segment networks: Towards good practices for deep action recognition[C]//European conference on computer vision. Springer, Cham, 2016: 20-36. ", " We thank for the Reviewer pointing out the drawback of our draft and we make the response as below.\n\n**Weakness 1: Practical Speedup.** \nWe thank the Reviewer for this comment. First, we want to clarify that the frame selection process in our method is not technically implemented as sparse convolution. As AFNet is 2D based model which uses 2D convolutions, the temporal dimension is put on batch dimension in real implementation. Therefore, the frame selection can be easily implemented by the slicing operation on batch dimension and this process will not hurt the computational graph of vanilla convolution, unlike other pruning methods. In this way, we can achieve efficiency by only computing the selected frames at each convolutional block.\n\nFurther, we test the CPU (Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz) and GPU (NVIDIA GeForce GTX TITAN X) inference time of the competing methods listed in Table 2. Note that all methods are implemented on ResNet-50[1] and we use the original code provided by the authors. We sample 12 frames with the input size of 224x224 for all methods for fair comparison and get the average inference time over 100 runs:\n| Method | GPU(ms) | CPU(ms) | \n| :-----: | ------------ | :-----: |\n| bLVNet-TAM | 37 | 798 |\n| TANet | 27 | 595 |\n| SmallBig | 66 | 1268 |\n| TEA | 40 | 755 |\n| TSM | 22 | 633 |\n| AdaFocus-TSM | 45 | 744 |\n| AdaFuse-TSM | N/A | N/A |\n| AFNet-TSM(training) | 52 | 518 |\n| AFNet-TSM(inference) | 32 | 422 |\n\n\nWe can see from the table that AFNet achieves the fastest speed on CPU, making it favorable for employment on edge devices. While the GPU speed of AFNet is not as good as the speedup on CPU which shows inferior performance to static method TSM and TANet. The potential explanation is that the two-branch structure is less favorable in GPU acceleration and we do not have hardware-oriented optimization in our implementation yet. However, AFNet-TSM costs less inference time than dynamic method AdaFocus-TSM in both CPU and GPU. As for another dynamic method AdaFuse, the author does not provide the code for efficient inference, so we do not test their speed. Further, we compare the speed of AFNet-TSM in training mode (simply adding a temporal mask) with inference mode (only computing on salient frames) to demonstrate that the frame selection indeed boosts efficiency (more detailed explanations of the two modes can be found in our reply to Question 3 pointed out by Reviewer ChDo).\n\n**W2: Accuracy/computation gain not significant.** \nFirst, we want to stress that we did not make the claim that our method is significantly better than other approaches as previous works already push the line to a very high standard. However, we want to emphasize that our work offers a new perspective from other dynamic methods as the two-branch structure prevents the information loss (detailed analysis can be found in our reply to Weakness 1 pointed out by Reviewer ChDo). Moreover, the frame selection at intermediate features not only enforces implicit temporal modeling which is seldom touched in this area, but enables end-to-end training as well. This is also an advantage over many other dynamic methods like AR-Net[2], VideoIQ[3] as they have to split the training into multiple stages which makes them hard to implement.\n\n**W3: No promise on code release.** \nThanks for bringing it up. 
We have provided the code during submission and will release the code once this paper is accepted.\n\n**W4: Missing related works.** \nWe thank the Reviewer for pointing this out. \nHowever, the differences between SlowFast and AFNet are: (1) network category: SlowFast[4] is a static 3D model, but AFNet is a dynamic 2D network;\n(2) motivation: SlowFast is designed to capture semantic information and changing motion with branches at different temporal speed for better performance, while AFNet is aimed to dynamically skip frames to save computation and the design of two-branch structure is to prevent the information loss;\n(3) specific design: AFNet is designed to downsample features for efficiency at ample branch while SlowFast processes features in the original resolution;\n(4) temporal modeling: SlowFast applies 3D convolutions for temporal modeling, AFNet is a 2D model which is enforced with implicit temporal modeling by the designed navigation module. \nWe will add this part in our final version and also cite \"End-to-end Learning of Action Detection from Frame Glimpses in Videos\"[5].\n\n**W5: Writing/Typos.** \nWe will revise the typos and modify the mentioned sentence for clear descriptions. Thanks for the advice.", " [1] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 6202-6211. \n[2] Meng Y, Lin C C, Panda R, et al. Ar-net: Adaptive frame resolution for efficient action recognition[C]//European Conference on Computer Vision. Springer, Cham, 2020: 86-104. \n[3] Wang Y, Chen Z, Jiang H, et al. Adaptive focus for efficient video recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 16249-16258. \n[4] Sun X, Panda R, Chen C F R, et al. Dynamic network quantization for efficient video inference[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 7375-7385. \n[5] Kondratyuk D, Yuan L, Li Y, et al. Movinets: Mobile video networks for efficient video recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 16020-16030. \n[6] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778. \n[7] Wang L, Xiong Y, Wang Z, et al. Temporal segment networks: Towards good practices for deep action recognition[C]//European conference on computer vision. Springer, Cham, 2016: 20-36. \n[8] Lin J, Gan C, Han S. Tsm: Temporal shift module for efficient video understanding[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 7083-7093. \n[9] Howard A, Sandler M, Chu G, et al. Searching for mobilenetv3[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 1314-1324. \n[10] Feichtenhofer C, Pinz A, Zisserman A. Convolutional two-stream network fusion for video action recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 1933-1941. \n[11] Liu K, Liu W, Gan C, et al. T-C3D: Temporal convolutional 3D network for real-time action recognition[C]//Proceedings of the AAAI conference on artificial intelligence. 2018, 32(1). \n[12] Korbar B, Tran D, Torresani L. Scsampler: Sampling salient clips from video for efficient action recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 
2019: 6232-6242.", " **Q4: Fewer frames, higher accuracy.** \nThanks for pointing out this phenomenon. However, we want to clarify that we did not make the claim that selecting fewer frames will lead to higher accuracy. As we have mentioned in Section 3.2, the learned binary mask will decide a coefficient whether to be calculated for every frame at each block which results in learned temporal weights for implicit temporal modeling. Therefore, selecting a ratio of 0.25 and 0.5 frames will achieve much higher accuracy than selecting all frames (choosing all frames leads to a fixed weight for all frames). A possible explanation for the accuracy of AFNet(RT=0.25) being higher than AFNet(RT=0.5) is that a temporal mask which selects fewer frames can better restrain the noise of redundant frames.\n\n**Q5: Comparison with MoViNet.** \nThanks for pointing it out. Though, we want to emphasize that this paper is aimed at developing efficient algorithms rather than building strong temporal modeling structures, and the main comparisons in this work should be AFNet with other dynamic methods, which is the common practice in this area (see papers like AR-Net[2], AdaFocus[3], VideoIQ[4]). \nIndeed, MoViNet[5] is a very efficient network for video recognition. However, it adopts 3D convolutions for temporal modeling, while AFNet is built on 2D network ResNet-50[6] which does not involve any temporal modeling module like other dynamic methods[2],[3],[4]. Besides, MoViNet is a NAS product which costs unaffordable computational cost in training for many researchers, while AFNet is a hand-crafted structure that can easily be trained in an end-to-end fashion like other static methods TSN[7], TSM[8]. Moreover, the search space of MoViNet is built on MobileNet v3[9], but AFNet is based on ResNet-50 which makes the comparison unfair. \nHowever, we are running experiments which build our designed AF module on MobileNet v3 to test the generalization of our method on efficient backbones. We will add the results in our revision.\n\n**Q6: Comparison with Two-stream network[10] and T-C3D[11].** \nAs we mentioned in the reply to Question 5, the main comparisons in our paper should be with other dynamic methods and we only conduct experiments on datasets that are used in previous works[2],[3],[4]. In Two-stream Network and T-C3D, the results are only based on UCF101 and HMDB51. Therefore, we further conducted experiments on these two datasets and we report the mean accuracy across 3 splits in this table:\n| Method | Pretrain | UCF101 Top-1 Acc. | HMDB Top-1 Acc. |\n| :-----: | ------------ | :-----: | :-----: |\n| Two-stream(S:VGG-16 T:VGG-M) | Kinetics400 | 90.8% | 62.1% |\n| Two-stream(S:VGG-16 T:VGG-16) | Kinetics400 | 92.5% | 65.4% |\n| T-C3D | Kinetics400 | 92.5% | 62.4% |\n| TSM | Kinetics400 | 95.9% | 73.5% |\n| AFNet-TSM | Mini-Kinetics | 91.5% | 65.3% | \n\nOur performance does not achieve the best as we only have the pretrained model on Mini-Kinetics which only have half the class compared with kinetics400. TSM has clear advantages over the other two methods, and our model can outperform TSM on accuracy and efficiency on Something-Something. Therefore, we believe our method can potentially get much better results if AFNet can be pretrained on Kinetics400. 
Nevertheless, we want to stress again that the comparison with other static methods which is designed for better temporal modeling is not a common practice in this area.\n\n\nWe hope the explanations can address your concerns and we would appreciate it a lot if you can recognize the contributions of our work.", " We appreciate the Reviewer's approval and valuable comments. We respond to the Reviewer's concerns as below.\n\n**Weakness 1: Technical Contribution.** \nThough AFNet seems similar to SlowFast[1], we stress that the motivation of the two-branch structure and specific designs of our method are different from it. AFNet is a dynamic 2D network which adaptively selects salient frames to achieve efficiency, while SlowFast is a static 3D model which captures semantic information and changing motion to obtain higher accuracy. \nOther differences are: (1) specific design: AFNet is designed to downsample features for efficiency at ample branch while SlowFast processes features in the original resolution;\n(2) temporal modeling: SlowFast applies 3D convolutions for temporal modeling, AFNet is a 2D model which is enforced with implicit temporal modeling by the designed navigation module. We thank the Reviewer for pointing it our and we will supplement this part in our final version.\n\n**W2: Discussion of SCSampler[12].** \nIndeed, SCSampler utilizes a tiny network to do frame selection and sends the selected salient frames into a larger network to do action recognition. Though, this process will lead to information loss as the unselected frames will be completely abandoned (more analysis can be found in our reply to Weakness 1 pointed out by Reviewer ChDo). However, we want to emphasize that our approach is different from this line of research as the frame selection process of AFNet is at intermediate features. This brings three effects: \n(1) the dynamic frame selection at intermediate features will empower the model with strong flexibility as different frames will be selected at different layers;\n(2) the temporal masks at different convolutional blocks will result in implicit temporal modeling which is demonstrated in Section 3.2 and the results in Table 1;\n(3) the information loss caused by frame selection on focal branch can be compensated by the features in the ample branch to prevent the information loss. \nWe will include the discussion in our final version.\n\n**W3: Experiments with more frames.** \nWe conduct experiments on ActivityNet with more sampled frames:\n| Method | Frame | mAP | GFLOPs |\n| :-----: | ------------ | :-----: | :-----: |\n| TSN | 16 | 76.9% | 65.6G |\n| AFNet(RS=0.4,RT=0.8) | 16 | 76.6% | 32.9G |\n| TSN | 32 | 78.0% | 131.2G |\n| AFNet(RS=0.4,RT=0.8) | 32 | 77.6% | 60.9G |\n\nWhen sampling more frames, AFNet costs significantly less computations compared to baseline method with only a slight drop in performance which can be attributed to two-branch structure as it avoids the information loss. Thanks for the advice and we will add this part in our revision.\n\n**Question 1: Learned knowledge at different stages.** \nThanks for this question and we think it is important to analyze the learned policy to gain insight for the research community. In Figure 5, we calculate and show the distribution of RT at different stages. We can observe a decreased trend in RT for all curves which means that AFNet tends to select more frames at earlier layers and skip more at later stages. 
This can be explained by the fact that earlier layers mostly capture low-level information, which varies considerably across frames, while high-level semantics are more similar between frames; therefore, AFNet tends to skip more at later convolution blocks.\n\n**Q2: Discussion on Equation (3).** \nThe generated logits $p_n^t$ for each video have the shape $1\times(2\times T)\times1\times1$, which means that two values are assigned to each frame: $1\times2\times1\times1$. This vector denotes the probability of choosing that frame, and we send it to the Gumbel-Softmax module to generate a one-hot vector (which is the temporal mask $L$) for later frame selection.\n\n**Q3: Hyperparameter RT.** \nThe introduced hyperparameter RT is necessary for our method, and we think it is the key for AFNet to select fewer frames at lower cost. By adding the second term in Equation (16), the network is forced to reduce $r$ (the ratio of selected frames in the network) toward RT, which is set before training. For example, if we want AFNet to achieve smaller costs, we can set RT to 0.3 so that $r$ is forced to decrease from 1 to 0.3 during training. If we do not introduce RT and only use the cross-entropy loss, $r$ will remain nearly 1 during training, as selecting all frames potentially leads to better accuracy and a smaller cross-entropy loss, which prevents AFNet from achieving smaller costs.
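To make the role of RT concrete, a minimal sketch of such an objective follows. The trade-off weight `lam` and the averaging of $r$ over blocks are assumptions made for illustration rather than a verbatim transcription of Equation (16).

```python
import torch
import torch.nn.functional as F

def afnet_objective(logits, labels, temporal_masks, rt, lam=1.0):
    # temporal_masks: list of binary masks of shape (B, T), one per convolutional block
    ce = F.cross_entropy(logits, labels)
    r = torch.stack([m.float().mean() for m in temporal_masks]).mean()  # overall frame-keep ratio
    return ce + lam * (r - rt) ** 2  # second term pulls r toward the target RT
```

With rt = 0.3, the squared penalty gradually drives the keep ratio $r$ from 1 toward 0.3 over training, matching the behavior described above.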
\n[4] Wang L, Xiong Y, Wang Z, et al. Temporal segment networks: Towards good practices for deep action recognition[C]//European conference on computer vision. Springer, Cham, 2016: 20-36. \n[5] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 6202-6211. \n[6] Hu J, Shen L, Sun G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 7132-7141. \n[7] Li X, Wang W, Hu X, et al. Selective kernel networks[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 510-519.", " We appreciate the Reviewer's feedback. We make further explanations to clarify the Reviewer's concerns based on several key points as below.\n\n**Weakness 1: Description not clear enough.** \nAs shown in Figure 1, many existing dynamic networks (e.g., AR-Net[1], AdaFocus[2], VideoIQ[3]) adopt a policy network to select salient regions, frames, or proper resolutions for each frame which explicitly reduces redundancy in data. Then the newly formed data will be sent to the deep network for classification. \nFor example, when sampling 16 frames from the original video, static methods like TSN[4] will directly use the 16 frames as the input of the deep network. While most dynamic approaches will first preprocess the 16 frames (e.g., selecting salient frames or using smaller resolutions for unimportant frames) and then send the new formed data into the deep network. \nIn this manner, lots of computations will be saved. Though, there will be information lost during the preprocessing phase as the unimportant frames or regions are completely abandoned. This phenomenon motivates us to design AFNet which adopts a two-branch architecture to avoid the information loss. \nWe conduct a pilot study on ActivityNet to compare multiple dynamic approaches with the baseline network TSN in the table below. Note that all methods use ResNet-50 as the backbone for classification and do not involve any temporal modeling module.\n| Method | Frame | mAP | \n| :-----: | ------------ | :-----: |\n| TSN | 16 | 76.9% |\n| AR-Net | 16 | 73.8% |\n| VideoIQ | 16 | 74.8% |\n| AdaFocus | 16 | 75.0% |\n| AFNet | 16 | 76.6% |\n| |\n| TSN | 32 | 78.0% |\n| AFNet | 32 | 77.6% |\n\nWhen sampling 16 frames, AR-Net, VideoIQ, and AdaFocus have worse performance compared to TSN as they use smaller resolutions or small crops of the original data. \nHowever, AFNet exhibits similar performance (slightly inferior) compared with TSN as the two-branch design can effectively prevent the information loss caused by frame selection. Besides, it is worth noting that our computations are much smaller compared with TSN. Furthermore, we conduct experiments on 32 frames and the phenomenon is similar to 16 frames.\n\n**W2: Discussion with SlowFast[5].** \nWe thank the Reviewer for pointing it out. 
However, the differences between the two methods are: (1) network category: SlowFast[5] is a static 3D model, but AFNet is a dynamic 2D network;\n(2) motivation: SlowFast is designed to capture semantic information and changing motion with branches at different temporal speed for better performance, while AFNet is aimed to dynamically skip frames to save computation and the design of two-branch structure is to prevent the information loss;\n(3) specific design: AFNet is designed to downsample features for efficiency at ample branch while SlowFast processes features in the original resolution;\n(4) temporal modeling: SlowFast applies 3D convolutions for temporal modeling, AFNet is a 2D model which is enforced with implicit temporal modeling by the designed navigation module. \nWe will add this part in our final version.\n\n**Question 1: Information loss leads to inferior performance.** \nAs we have explained in the reply to Weakness 1, information loss is used to describe the preprocessing phase of dynamic networks, instead of the sampling procedure for videos. From the table in that reply, we can see that existing dynamic works overlook the problem of information loss which leads to lower accuracy compared with TSN when sampling the same number of frames. Therefore, it motivates us to design the two-branch structure to avoid the information loss and achieve a better trade-off between accuracy and efficiency.\n\n**Q2: Fusion strategy.** \nWe admit that there is an unclear description of this fusion strategy. Indeed, the learnable weight has been well explored in previous work [6],[7] and we tailored this design into our two-branch network for feature aggregation. \nHowever, we want to stress that the fusion strategy is not the main contribution of this paper. Besides, we have conducted experiments on ActivityNet to prove the effectiveness of this design:\n| Method | mAP | \n| :-----: | ------------ |\n| AFNet(RT=0.5) | 74.3% |\n| AFNet(RT=0.5)_wo_fusion | 73.5% |\n\nThe results can demonstrate that this design is non-trivial as it effectively balances the weights between the features from two branches.", " \n**Q2: Learning weight matrix.** \nIf we understand the Reviewer's comment correctly, we are suggested to learn a soft weight for each frame.\nWe conduct the experiment by removing gumbel softmax in our navigation module and modifying it to learn a soft temporal attention for the T frames in focal branch. The result is shown below.\n\n| Method | Top-1 Acc. |\n| :-----: | ------------ |\n| TSN | 18.6% | \n| AFNet(RT=1.00) | 19.2% |\n| AFNet(RT=0.50) | 26.8% |\n| AFNet(RT=0.25) | 27.7% |\n| AFNet(soft_weight) | 27.0% |\n\nWe can make several conclusions from the table:\n(1)\tthe learned weights significantly improve the performance of AFNet(RT=1.00) as it did not build any temporal modeling module like TSN, and the gain in performance can again demonstrate the effectiveness of our navigation module.\n(2)\tAFNet(soft_weight) has a similar performance to AFNet(RT=0.25) and AFNet(RT=0.5) which meets our expectations as we have analyzed in Section 3.2.\nThough the navigation module just learns a binary mask for each frame, it will decide whether the coefficient will be calculated for each frame at every convolutional block which results in learned temporal weights in each video. Learning a soft weight cause the same effect. 
\nThanks for this great suggestion, we will add this part in our revision as it can help to better explain our implicit temporal modeling.\n\n[1] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 6202-6211. \n[2] Jang E, Gu S, Poole B. Categorical reparameterization with gumbel-softmax[J]. arXiv preprint arXiv:1611.01144, 2016. \n[3] Wang L, Xiong Y, Wang Z, et al. Temporal segment networks: Towards good practices for deep action recognition[C]//European conference on computer vision. Springer, Cham, 2016: 20-36. \n[4] Meng Y, Lin C C, Panda R, et al. Ar-net: Adaptive frame resolution for efficient action recognition[C]//European Conference on Computer Vision. Springer, Cham, 2020: 86-104. \n[5] Sun X, Panda R, Chen C F R, et al. Dynamic network quantization for efficient video inference[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 7375-7385. \n[6] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.", " We appreciate the Reviewer’s approval and constructive suggestions for us to improve our work. We make the response as below.\n\n**Weakness 1: Relation to SlowFast and Dynamic Selection.** \n***SlowFast***: Good point. Yet, the differences are: (1) network category: SlowFast[1] is a static 3D model, but AFNet is a dynamic 2D network;\n(2) motivation: SlowFast is designed to capture semantic information and changing motion with branches at different temporal speed for better performance, while AFNet is aimed to dynamically skip frames to save computation and the design of two-branch structure is to prevent the information loss;\n(3) specific design: AFNet is designed to downsample features for efficiency at ample branch while SlowFast processes features in the original resolution;\n(4) temporal modeling: SlowFast applies 3D convolutions for temporal modeling, AFNet is a 2D model which is enforced with implicit temporal modeling by the designed navigation module.\n\n***Dynamic Selection***: We admit that Gumbel softmax optimization[2] has been well explored in previous works. However, employing Gumbel softmax optimization at intermediate features for frame selection has been seldom touched in this area.\nThe design of combining navigation module with the two-branch structure is nontrivial as we demonstrated it enforces implicit temporal modeling in Section 3.2 and the experiments in Table 1. Concretely, AFNet outperforms TSN[3] by 9.1% on Something-Something V1 with the help of implicit temporal modeling.\n\nWe really appreciate it that the Reviewer recognizes our contributions and we will add this discussion in our final version.\n\n**Weakness2: Hard-sampling methods.** \nThanks for the great suggestion.\nWe leave the perturbed maximum method in our future works due to limited time during rebuttal.\nHowever, we would like to emphasize that one advantage of AFNet over many other dynamic networks is that we can train the network in an end-to-end fashion without extending the training epochs.\nHowever, previous works like AR-Net[4], VideoIQ[5] have to split the training into multiple stages as they introduce an extra policy network to make decisions.\n\n**Weakness3: Stronger backbones and more frames.** \nFirst, we conduct experiments on Somethin-Something V1 for backbone based on ResNet-101[6].\n| Method | Frame | Top-1 Acc. 
| GFLOPs |\n| :-----: | ------------ | :-----: | :-----: |\n| TSM_101 | 8 | 47.2% | 62.8G |\n| AFNet-TSM_101(RT=0.4) | 8 | 47.2% | 28.0G |\n| TSM_101 | 12 | 49.1% | 94.2G |\n| AFNet-TSM_101(RT=0.4) | 12 | 49.8% | 42.1G | \n\nThe results show that our method generalizes well to stronger backbones. Then we conduct experiments on ActivityNet with more sampled frames:\n| Method | Frame | mAP | GFLOPs |\n| :-----: | ------------ | :-----: | :-----: |\n| TSN | 16 | 76.9% | 65.6G |\n| AFNet(RS=0.4,RT=0.8) | 16 | 76.6% | 32.9G |\n| TSN | 32 | 78.0% | 131.2G |\n| AFNet(RS=0.4,RT=0.8) | 32 | 77.6% | 60.9G |\n\nWhen sampling more frames, AFNet costs significantly less computations compared to baseline method with only a slight drop in performance which can be attributed to two-branch structure as it avoids the information loss (more analysis can be found in our reply to weakness 1 pointed out by Reviewer ChDo). Thanks for the advice and we will add this part in our revision.\n\n**Question 1: FLOPs counterintuitive.** \n***GFLOPs of RT=0.8 is only 64.6% of TSM:*** Thanks for carefully reading our paper.\nThe reason is due to the two-branch structure: (1) the ample branch is designed to squeeze the spatial and channel size by a factor of two which makes this branch very lightweight. In this way, we can prevent the loss of information by keeping all the frames at ample branch but with minimal costs.\n(2) we only compute the salient frames at the focal branch and the group number of convolutions at this branch is set to two to further reduce the cost (illustrated in lines 153,154).\n(3) our navigation module is designed to be very lightweight as the computational cost is only 19M in this RT=0.8 model. Based on these reasons above, the GFLOPs of AFNet is very small. \n\n***GFLOPs of RT=0.8 is slight larger than RT=0.4:*** This is because we only conduct frame selection at focal branch and the saved computations can only be obtained through this branch which results in a non-linear deduction in GFLOPs. Though, AFNet still achieves a smaller cost compared to other methods owing to the design of our two-branch structure and dynamic selection of navigation module.", " This paper proposes a two-branch network for efficient video recognition. In particular, the Ample Branch takes densely sampled input frames and processes them with reduced channel sizes. On the other hand, the Focal Branch only processes salient frames selected by a navigation module. Extensive experiments on multiple video benchmarks show that the proposed AFNet achieves state-of-the-art results with lower computational cost. Strength\n1. The proposed architecture, AFNet, is simple yet effective for efficient video action recognition. As shown in the experiment section, AFNet achieves even better results than its baseline with lower computational cost. The idea of leveraging more input frames for avoiding information loss and salient frame selection is well-motivated, and the encouraging results provided in this paper are potentially insightful for the research community.\n2. The paper is overall well organized and well written.\n\nWeakness\n1. The novelty of individual components of AFNet is limited. For example, the two-branch design with downsampled \"fast branch\" is explored in SlowFast network (without dynamic selection of salient frames though); the navigation module along with the Gumbel softmax optimization technique is also used in prior work for dynamic selection [1]. 
However, I believe that the contribution of the overall architecture design is sufficient and the proposed method achieves good results.\n\n2. Since Gumbel softmax is used for dynamic sampling of frames, the computational cost cannot be reduced during training. Although we usually care more about the computational cost of a model at inference, the cost of training will become a bottleneck if (1) the model is too large to fit into GPU memory (e.g., using more frames than 12 frames used in the paper); (2) the model training takes too much time. I understand it's out of the scope of this paper, but I'd suggest the author to try other \"hard-sampling\" algorithms for the selection module, for example, the perturbed maximum method [2, 3].\n\n3. Because the proposed two-branch design is generic to different backbone models, it's always better to see more experimental results with stronger backbones (e.g., ResNet-101 or even Transformers) and using more frames (12 frames are still a small number for long videos such as those in ActivityNet).\n\n\n[1] Rao, Y., Zhao, W., Liu, B., Lu, J., Zhou, J., Hsieh, C.J.: Dynamicvit: Efficient vision transformers with dynamic token sparsification. In: NIPS (2021)\n[2] Berthet, Quentin, et al. \"Learning with differentiable pertubed optimizers.\" Advances in neural information processing systems 33 (2020): 9508-9519.\n[3] Cordonnier, Jean-Baptiste, et al. \"Differentiable patch selection for image recognition.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. 1. The GFLOPs of RT=0.8 is a bit counterintuitive. Considering (1) the introduction of the addition Amber Branch, navigation module and (2) the last stage of the network remains unchanged, the GFLOPs of the model with RT=0.8 should be larger than 80% of that of the backbone model. This trend can be observed for the setting RT=0.4. However, for RT=0.8, GFLOPs = 31.7 is only 64.6% of TSM (GFLOPs=49.1), which is just slightly larger than RT=0.4 (GFLOPs=27.8). Please clarify this in the rebuttal.\n\n2. For the navigation module, instead of learning a sampling strategy, what if we learn a linear function that transforms the original T frames to T' steps? In other words, we aim to learn a T x T' weight matrix that compute T' weighted averages of the T frames. It's differentiable and easy to optimized with the rest of the model, and I'm curious whether it'll give worse results than doing frame sampling. Yes.", " This paper mainly targets action recognition. The authors advocate the importance of utilizing as many frames from each video as possible, while preserving proper computational cost. To this end they propose a novel architecture with two branches, one of which handles the whole video snippet with lower resolution and samples informative frames for fine-grained inference in another branch. Strength:\n1. The idea of end-to-end frame selection is interesting.\n2. The authors provide abundant experiment to verify the effectiveness of their method.\n\n\nWeakness:\n1. The description is not clear enough. For example, the authors mention that the motivation of their method is to avoid ‘the loss of information compared to other dynamic methods’. However, it is not shown in this paper what kind of information loss exists in previous methods, and how such loss does harm to the performance. It would be better if more pilot study can be provided.\n2. 
The proposed two-branch structure with different temporal scale is similar to SlowFast [1], which is one of the most famous action recognition models and also builds lateral connection between a slow branch and a fast branch with difference frame rates. The authors should consider discussing about this paper when introducing the proposed method.\n\n[1] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 6202-6211. 1. In fact the claim that information loss leads to inferior performance is kind of counterfactual. Many previous studies have shown that for simple actions even one frame is enough for inference. On the other hand, for those complicated actions, temporal redundancy still exists. If the authors want to show the merit of utilizing all frames, it may be a good choice to directly train a model with all frames regardless of computational cost to show if it can significantly outperform the existing methods.\n2. The authors mention that outputs from two branches are merged using ‘specially designed fusion strategy’ (L120). However, the fusion strategy as in Eq. 8 is simply a weighted average with learnable weight. I am afraid that the authors overclaim their contribution on this module.\n3. The authors utilize residual connection between layers in the focal branch. This is questionable since different layers inquires potentially different frames due to the designed navigation module, which means features from different layers are not aligned in temporal dimension. I wonder if it is proper to directly add these features together.\n4. The authors claim that the proposed method do not build temporal modeling module. I am not sure whether the $W_2$ in Eq. 2 plays such a role to interact among frames.\n Several points of limitations are mentioned, which is comprehensive and provides promising future works for the current method.", " This paper presents an efficient framework called Ample and Focal Network (AFNet) for video recognition. This framework consists of two branches: the ample branch preserves all input features by lightweight computation; the focal branch extracts features only from selected frames to save cost. By fusing the features of these two branches, AFNet can keep its focus on the crucial information while requiring less computation. Experiments on five datasets demonstrate the superiority of AFNet compared to state-of-the-art methods. However, I have some concerns about this paper. My detailed comments are as follows. Strengths:\n1. This paper presents an efficient framework called Ample and Focal Network for video recognition, which uses more frames but reduces computation.\n2. The authors use a two-branch framework, in which two branches are complementary to each other, to prevent information loss when selecting fewer frames.\n3. The authors propose a navigation module that can select informative frames to save computational cost and is compatible with spatial adaptive works.\n\nWeaknesses:\n1. My biggest concern lies in the technical contribution of this paper. This method uses two streams, one for dealing with frames with high spatial resolution but low temporal resolution. The other process frames with low spatial resolution and high temporal resolution. Such an idea seems similar compared with the Slow-Fast [a] network.\n2. 
As for the adaptive frame selection, Scsampler [b] also uses a tiny network to select frames and a larger network to do action recognition. I suggest adding more discussions between them.\n3. In the experiments, the authors use only 12 frames (compared with 8-frame settings in previous methods), which is not convincing enough to verify the efficiency of the proposed methos.\n\n[a] SlowFast Networks for Video Recognition. ICCV 2019.\n[b] Compressing Videos to One Clip With Single-Step Sampling. CVPR 2022. 1. The proposed method uses several navigation modules in a recurrent manner. What did the model learn to select in different stages?\n2. In section 3.1, the logit $p_n^t$ for frame t is generated with Eq.(3). Then for $p_n={p_n^t}_{t=1}^T$, all $p_n^t$ are with the same values as they are generated with the same convolution weights $W_2$ and feature $\\tilde{v}_{y_n^a}$. More discussions are required.\n3. The second term in Eq.(16) is used to contain the ratio of the selected frame with the square of the difference between $r$ and $RT$. However, This introduces an additional hyper-parameter $RT$ and restricts the model from selecting fewer frames for lower computation cost.\n4. InTab.1, AFNet achieves higher accuracy with fewer frames, which is counterintuitive. The explanation given by the authors is only for why AFNet achieves higher accuracy than TSN, not for why the fewer frames are selected, the higher accuracy AFNet achieves. More clear explanations are required.\n5. Why is AFNet not compared to other efficient frameworks like MoViNet[3]? More explanations are needed.\n6. Some action recognition methods are missing, such as Two-Stream Network [1] and T-C3D [2].\n\n[1]\t“Convolutional Two-Stream Network Fusion for Video Action Recognition.” CVPR (2016)\n[2]\t“T-C3D: Temporal Convolutional 3D Network for Real-Time Action Recognition.” AAAI (2018).\n[3]\t\"Movinets: Mobile video networks for efficient video recognition.\" CVPR (2021)\n\n The authors adequately addressed the limitations and potential negative societal impact of their work.", " This paper proposes Ample and Focal Network (AFNet) for video recognition. Specifically, the network has an ample and a focal branch. The ample branch operates on a set of neighboring (with strides) frames with reduce-sized feature maps (in height, width and channels), whereas the focal networks then takes in both input frames and intermediate predictions from focal network to deceptively process selected frames, with higher computation budget. The resulting network is claimed to have a better accuracy-computation trade-off than previous work. The authors conducted experiments on Something-Something v1/v2, Mini-Kinetics, Jester and ActivityNet and demonstrated that their method yields better accuracy compared to several baselines (e.g. TSM, AdaFuse-TSM, bLVNet etc) on these datasets. \n Strengths\n+ The two-branch design makes intuitive sense, with one focus on lightweight processing on dense inputs, while the other process sparsely selected inputs with heavier computation.\n+ The paper is comprehensive in structure --- in addition to the qualitative description of the approach, the authors also included a section on the theoretical analysis of implicit temporal modeling. \n+ The result section includes comparison to several baseline methods across several datasets. It seems the proposed method has an edge on accuracy-computation trade-off across these comparisons. 
There are also some qualitative visualizations and ablations to dissect the approach. \n\nWeaknesses\n- As shown in Figure 2, the frame selection process is implemented as sparse convolutions, for which from the texts I cannot tell how efficient they are. This becomes more of an issue since in all tables the authors report FLOPs rather than actual inference latency. \n- Across Sth-Sth (Table 2), Mini-Kinetics (Table 3), and ActivityNet (Table 4), the accuracy/computation gains over the competitor is definitely not to a level that I'll consider significant. \n- There is no promise on code release, which might make it hard to reproduce the reported results. \n- For related work, the proposed approach missed an important citation: SlowFast Networks from Feichtenhofer et al. As it also sits on this two-branch idea for video recognition where one lightweight branch focuses on motion and another heavy branch focuses on semantics. This should be added and discussed. Another related paper is \"End-to-end Learning of Action Detection from Frame Glimpses in Videos\" from Yeung et al., as it first proposed to selective focus on a subset of frames for video recognition. \n\nWriting/Typos\n- L58, \"but strength the representation\" --> \"but strengthen the representation\"\n- L125, C_o, H_o and W_o are not introduced anywhere up until this point. \n- L273, \"to analysis the results\" --> \"to analyze the results\"\n- In L136, t in p^t_n is used to index frames, whereas in Eq 4, the superscript is overloaded to denote the frame selection flag. This will cause some confusions. \n - L150, isn't L_n a continuous vector computed using Eq 6? Is there some thresholding used here before selecting non-zero values? \n- Eq 1, is it necessary to keep the v notation on the left hand side of the equation? Since it represents input video, keeping it here might cause some confusion. \n- L145, \"we let tau decrease from 1 to 0.01 during train\", what's the decay schedule used? Do different schedulings make a difference? \n- Table 1, the improvement from TSN to AFNet is quite significant, and honestly a bit surprising. Does the author investigate the possibility of overfitting for the TSN results? \n Yes", " The paper proposed to reduce the cost and boost the accuracy of CNN models applied to video action recognition via a two-stream approach. The first, \"ample\" stream processes all of the frames, but cheaply, by using low spatial resolution and number of channels. The second \"focal\" stream processes frames at high-resolution, but only processes a few of the input frames. A navigation model uses the input from the ample stream to select which frames the focal stream should process, using Gumbel softmax.\n\nThe paper presents results on the ActivityNet, Something-Something, and Mini-Kinetics datasets, demonstrating strong accuracies at competitively-low flop counts.\n\nAdditional studies explore allowing the navigation module to make spatial, as well temporal, selections, and demonstrate that the learned navigation model achieves superior performance to other simpler sampling strategies.\n **Originality** The main weakness is that the premise has already been explored thoroughly in other publications. Specifically in \"SlowFast Networks for Video Recognition\" https://arxiv.org/abs/1812.03982 (ICCV 2019, 1300+ citations). 
Both use the idea of a high frame rate low spatial resolution / channels stream, combined with a low frame rate high spatial resolution, with lateral connections from the fast pathway to the slow pathway. Both are targeted at limiting flops while boosting accuracy. \n\nMore recently in \"Multiview Transformers for Video Recognition\" (CVPR 2022) which extends the slow-fast premise to 2+ streams using transformer backbones, instead of CNNs.\n\nThe paper and accompanying analysis is incomplete without a direct comparison to SlowFast. \n\nThe primary difference is the adaptive selection of frames by the navigation module, vs. SlowFast's fixed temporal sampling rates plus lateral connections. Given that, at a quick glance the numbers seem comparable between the two(*), additional comparison between the approaches is warranted.\n\n**Clarity:** The paper is clearly written and generally easy to follow. Section 3.2 is a minor exception: I was eager to see the implicit temporal modeling, but found this explanation hard to follow. Perhaps it would benefit from more prose and fewer equations?\n\nLine 293: Please elaborate on how to \"sample frames from a gaussian distribution\", which is not obvious since Gaussian variables are continuous, not discrete.\n\n**Quality:** Generally good, except for the omission of highly-cited related work.\n\n**Significance:** As mentioned, this paper's significance is diminished by its limited originality.\n\n\n===\n\n(*) The original SlowFast paper publishes numbers on different datasets than this work, but from what I can piece together, SlowFast seems better or comparable. If it's fair to compare numbers on MiniKinetics and Kinetics-400, SlowFast gets 74.2% top-1 at 28.6 GFLOPs (see SlowFast Table 2 (b)), while this paper reports 73.5% top-1 at 22.0 GFLOPs. On Something-Something v2, SlowFast-ResNet50 seems to get 61.7 (see \"Multiview Transformers...\" Table 2 (c)) which is comparable to this paper's 61.3/62.5.\n\nI recognize the limitations of this analysis, and would love to see a proper apples-to-apples comparison.\n Why is TSN used as a baseline in Table 1? Although it's a great paper, the method is 6 years old.\n yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 5, 4 ]
[ "dUbJwatsor", "DeMn7YGAl85", "w4uRoZaLRHb", "yiTSSBEOtH-", "BaOjV3hSrh0", "w4uRoZaLRHb", "x5Q0wcuusc", "nips_2022_owZdBnUiw2", "x5Q0wcuusc", "flU5hgQuEbu", "Ulel-9INRLJ", "w4uRoZaLRHb", "yiTSSBEOtH-", "BaOjV3hSrh0", "cK33T7udapT", "yiTSSBEOtH-", "yiTSSBEOtH-", "BaOjV3hSrh0", "BaOjV3hSrh0", "w4uRoZaLRHb", "w4uRoZaLRHb", "w4uRoZaLRHb", "cK33T7udapT", "cK33T7udapT", "x5Q0wcuusc", "x5Q0wcuusc", "nips_2022_owZdBnUiw2", "nips_2022_owZdBnUiw2", "nips_2022_owZdBnUiw2", "nips_2022_owZdBnUiw2", "nips_2022_owZdBnUiw2" ]
nips_2022_evRyKOjOx20
Optimistic Mirror Descent Either Converges to Nash or to Strong Coarse Correlated Equilibria in Bimatrix Games
We show that, for any sufficiently small fixed $\epsilon > 0$, when both players in a general-sum two-player (bimatrix) game employ optimistic mirror descent (OMD) with smooth regularization, learning rate $\eta = O(\epsilon^2)$ and $T = \Omega(poly(1/\epsilon))$ repetitions, either the dynamics reach an $\epsilon$-approximate Nash equilibrium (NE), or the average correlated distribution of play is an $\Omega(poly(\epsilon))$-strong coarse correlated equilibrium (CCE): any possible unilateral deviation does not only leave the player worse, but will decrease its utility by $\Omega(poly(\epsilon))$. As an immediate consequence, when the iterates of OMD are bounded away from being Nash equilibria in a bimatrix game, we guarantee convergence to an \emph{exact} CCE after only $O(1)$ iterations. Our results reveal that uncoupled no-regret learning algorithms can converge to CCE in general-sum games remarkably faster than to NE in, for example, zero-sum games. To establish this, we show that when OMD does not reach arbitrarily close to a NE, the (cumulative) regret of both players is not only negative, but decays linearly with time. Given that regret is the canonical measure of performance in online learning, our results suggest that cycling behavior of no-regret learning algorithms in games can be justified in terms of efficiency.
Accept
This paper proves a new phenomenon about the Optimistic Mirror Descent (OMD) algorithm in two-player general-sum matrix games (bimatrix games): The iterates either converge to an approximate Nash Equilibrium (NE), or converge to a Strong Coarse Correlated Equilibrium (CCE). This result links and improves over two existing understandings: (1) Convergence to NE is unlikely to be generally achievable by any efficient algorithm due to its PPAD-hardness; (2) Convergence to approximate CCE is achievable by any no-regret algorithm, but it is unclear whether such CCE is a strong CCE (in the present paper’s sense). Given the phenomenon is not only new but also fundamental and important to the game theory community, I believe it is worth the attention of NeurIPS audience, and thus recommend acceptance.
train
[ "zbha40zZJKE", "q5A2XRyDsvX", "1tkByKJtv4T", "yrsSr-OG_fS5", "B1w9F0nWzjW", "z0ZaFv9FfpO", "mgyudeuJ8Tq", "MQNquEsr47C", "55Bf-MuKmqr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the response and apologize for the delay in the discussions. I think the authors' remarks about the learning rates mostly address my concerns there---In this paper $\\eta=O(\\epsilon^2)$ with $\\epsilon=\\Theta(1)$ still has interesting implications, unlike in standard no-regret bounds where an average regret of $\\Theta(1)$ is not that hard to achieve. On the other hand, the authors have thought about large learning rate that does not depend on $\\epsilon$ (like other Optimistic Hedge papers e.g. Daskalakis et al. 2021) and concluded it currently seems a hard question. \n\nI also liked the new observation pointed out in the response to Reviewer Yw3X, that the theorem can be strengthened by weakening the precondition to \"not visiting approximate NE for only a $\\delta$-fraction of the iterates\" instead of all iterates (with result degrading when $\\delta$ gets small). Therefore, while I am still a bit concerned by the restrictiveness of the assumptions overall, I am now more optimistic about the flexibilities / potentials to be extended and thus have increased my score.\n", " We thank again the reviewer for the helpful feedback. Given that the discussion period is soon coming at an end, please let us know if we have adequately addressed the concerns raised, and if the reviewer has any further questions.", " We thank again the reviewer for the helpful feedback. Given that the discussion period is soon coming at an end, please let us know if we have adequately addressed the concerns raised, and if the reviewer has any further questions.", " We thank the reviewer for the helpful feedback. Below we address the main concerns.\n\n--- *“The result of visiting approximate Nash Equilibrium is less exciting and provides little information.”*\n\nComputing approximate Nash equilibria in bimatrix games is **one of the most fundamental problems in algorithmic game theory** with a tremendous amount of interest (please see our overview in Appendix A). Despite intense efforts, the best polynomial-time approximation is $\\approx 1/3$, and for even smaller constant approximations the problem is known to require superpolynomial time (Rubinstein (2016)). So, even reaching an $\\epsilon$-approximate Nash equilibrium, with a small constant $\\epsilon$, is clearly remarkable, especially as it derives from efficient uncoupled no-regret learning dynamics, which are also scalable in very large games.\n\n--- *“The paper will be much stronger if the results in Theorem 1 can implement last-round convergence to NE (not visiting) or strong CCE. A relevant result in zero-sum games about $\\epsilon$-NE leading to last round convergence can be found in Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization.”*\n\nThere is a direct way of extending Theorem 1 so that **most of the strategies** (say a $1 - \\delta$ fraction of them) are either $\\epsilon$-approximate Nash equilibria, or we obtain a strong CCE, **for any constant** $\\delta > 0$. This statement is significantly stronger and provides much more information than the one we included in our first version as it applies to, e.g., a $0.99 = 1 - \\delta$ fraction of the strategies---instead of a *single* strategy. Hopefully it will address the reviewer’s concern. 
\n\nIn proof, the difference is that if a $\\delta$ fraction of the iterates are far from being approximate Nash equilibria, then the sum of the players' second-order path lengths is now $\\Omega(\\epsilon^2 \\eta^2 T \\times \\delta)$; that is, it still grows linearly in $T$, but is now multiplied by a factor $\\delta$---recall our proof-sketch for Theorem 3.4. The rest of the proof is a matter of direct calculations. We will make sure to formally establish this in our revised version. \n\nWe also point out that it is possible to just check the NE-gap---the maximum of the best response gaps---at every iteration, and effectively terminate the dynamics after a sufficient accuracy has been reached. This preserves all of the interesting algorithmic implications of our main result. \n\nRegarding comparison with the paper “Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization,” that paper only considers **zero-sum games**. General-sum games behave completely differently than zero-sum games, so one should not compare our result in general-sum games with a guarantee that only applies to zero-sum games.\n\n--- *“I am wondering whether the existence of a strong CCE is guaranteed in every bimatrix game?”*\n\nThat’s a very interesting question. $\\epsilon$-strong CCE with $\\epsilon > 0$ are **not** guaranteed to exist in general bimatrix games (but of course 0-CCEs always exist). In particular, they do not exist for zero-sum games, or more broadly for *strategically zero-sum games*—games that “behave” like zero-sum. More precisely, in zero-sum games all CCE are 0-strong; this follows directly from the well-known collapse of CCE to NE in zero-sum games. It is a plausible conjecture that strong CCE exist if and only if the game is not strategically zero-sum, although we have not pursued this direction.\n\n--- *“In experiments, do the authors observe chaos behaviour of OMD? (e.g., the dynamic keeps approaching and moving away from a NE repeatedly)?”*\n\nIn some of our experiments OMD exhibits recurrent/cycling behavior (e.g., see our Goofspiel plot); this seems to be aligned with the findings of Piliouras and Cheung (2020) about the chaotic behavior of OMD in general-sum games. But, to answer the reviewer’s question, we did not observe the dynamics periodically approaching and moving away from NE. ", " We thank the reviewer for the helpful feedback. Below we address the main questions.\n\n--- *“The paper could have had more exposition regarding the setting where players do not have access to the full representation of the bimatrix game but rather infer knowledge of the game through oracles”*\n\nThere could be a misunderstanding here: we do **not** assume that the players have a representation of the bimatrix game. Instead, the players initially have no knowledge about the game whatsoever, and they gradually elicit information about their utilities via utility oracles (full-information and noiseless). In particular, we operate within the standard uncoupled no-regret learning setting.\n\n--- *“Minor point: In the appendix you mention that the state of the art for approximate $epsilon$-Nash being the Tsaknakis and Spirakis $0.3393 + \\delta$ result. Recently there has been a paper Deligkas et al.[...]”*\n\nWe thank the reviewer for pointing this out; we were informed about that improvement just after the NeurIPS submission. We will include it in the revised version.\n\n--- *“In the experimental section you mention the possibility of different initializations for OGD. 
Are there games where different intializations to the same game can give rise to either of the guarantees of your result (eps-Nash or eps-strong CCE)?”*\n\nThe answer is yes, at least in this sense: Nash equilibria are fixed points for OMD, so initializing at a Nash equilibrium will ensure that OMD will stay (and hence converge to) that Nash equilibrium. On the other hand, in our example in Section 4.1 we observed that the Nash equilibrium appears to be “repelling”---the plots that we illustrate also apply under random initializations. In other words, in that example it seems that we get strong CCE “for almost all initializations”---unless we start from the Nash equilibrium. But the reviewer’s point raises the very interesting question of whether either of the guarantees of our main theorem can arise under a non-trivial (e.g. nonzero) measure of initializations. We suspect that the answer is yes, but we have no concrete examples as of yet.\n\n--- *“Supposing that players do not have access to the representation of the game and infer utilities from potentially noisy oracles, are results robust to such a setting?”*\n\nExtending our results under noisy oracles is indeed a very interesting question—although we again note that we do not assume that players have access to the representation of the game. The answer to this question largely depends on the model of the noisy oracles. It is a plausible conjecture that under natural assumptions similar results can be derived, but we have not pursued this direction. \n\nOne notable comment here is that extending our results to the bandit setting—where the player does not observe the entire utility vector, but only the utility that corresponds to the player’s action at that round—seems challenging. In particular, even obtaining a second-order path length bound for the regret (which is weaker property than the RVU bound) is a notorious open question in the literature on bandit learning.\n", " We thank the reviewer for the helpful comments. Below we address the main concerns.\n\n--- *“I am a bit bothered by the small learning rate [...]”*\n\nOur theorem allows one to pick $\\epsilon$ to be a **universal constant** (for example, 0.1)---in which case $\\eta = \\Theta(1)$---while maintaining all of its interesting implications. Indeed, note that the best known polynomial-time algorithm only gives a $1/3$-approximate Nash equilibrium in bimatrix games; using any constant $\\epsilon > 0$ smaller than $1/3$ would imply the best known polynomial-time algorithm for Nash equilibria—one of the most fundamental problems in algorithmic game theory. We will clarify this point further in the paper.\n\nWe also remark that it is **not** typical to use $\\eta = \\Theta(1)$ in general-sum games; we are not aware of such guarantees (unlike in zero-sum games, where it is indeed standard to pick a constant $\\eta = \\Theta(1)$).\n\n--- *“Or alternatively, infinitesimally small learning rate like in gradient flow?”*\n\nAgain, all of the interesting implication of our result hold even for constant learning rate $\\eta = \\Theta(1)$. General-sum games are very different from, e.g., zero-sum games, where one wants to select $\\epsilon$ to be infinitesimally small in order to reach arbitrarily close to a Nash equilibrium. 
Our setting is fundamentally different for the reasons we described above—we do **not** require an infinitesimally small learning rate.\n\n--- *“I understand it is necessary in the current proof, but I wonder have the authors thought about larger learning rates [...]”*\n\nNotwithstanding the above remarks, this is an interesting question. We did pursue this direction extensively, but we now believe that it might be hard to do so—use a learning rate that does not depend on $\\epsilon$---especially using uncoupled methods (as we do). It is plausible that allowing players to use different learning rates would be helpful, but we did not pursue that direction since it would break the symmetry in how the players update their strategies.\n\n--- *“In numerical experiments, [...] but less standard.”*\n\nWhile our analysis does not cover algorithms such as optimistic multiplicative weights, it does go well-beyond Euclidean L2 regularization. We also stress that optimistic projected gradient descent (OMD with Euclidean regularization) is an extremely well-studied algorithm in recent years; we will make sure to stress this point further in our revised version.\n\n--- *“Section 4.1 needs a bit more polishing. [...]\"* \n\nWe will make sure to further polish Section 4.1.\n\n--- *“Figure 2 is also a bit confusing [...]“*\n\nThe x-axis of Figure 2 shows the **utility** of player X, while the y-axis the **utility** of player Y (please see the labels in the plot). We will make sure to also specify that in the caption.\n\n--- *“Some of the numerical details [...]”*\n\nWe will transfer many of the numerical details in the appendix.\n\n--- *“One caveat is the perhaps restrictive set of assumptions: two-player games”*\n\nTwo-player general-sum games are one of the most fundamental classes of games (please see Appendix A for an overview). Further, while our main result cannot be extended in multiplayer games (Remark 3.7), we still believe that there are interesting classes of games for which it can be extended.\n\n--- *“How is Lemma 3.3 different from standard RVU bound [...]*\n\nThe RVU bound is a statement about the regret of each player, while Lemma 3.3 does not involve at all the regrets of either player. Also, notice that the left-hand side of Lemma 3.3 does not involve a squared norm—unlike the RVU bound. So the RVU bound is quite different from Lemma 3.3. Their proofs also seem quite different to us, apart from minor similarities in some steps of the proof. Perhaps the reviewer could elaborate more on this point. \n\nRegarding intuition about Lemma 3.3, that result is a crucial piece in the proof of our main result, as it connects the path lengths between the two players. Notice that this particular step breaks in multiplayer games; there, it is possible that some of the players are completely “disconnected” from the other players, in which case there is no connection whatsoever between their path lengths.\n\n--- “Last sentence in abstract, [...]”\n\nEfficiency here is meant in terms of regret, which is a standard measure of performance in the literature on learning in games—this is alluded to in the previous sentence of the abstract, but we will make sure to clarify it in the revised version. In particular, when the dynamics do not reach Nash equilibria, the guarantee in terms of the regret is remarkably better than the $O(1)$ guarantee which is possible, for example, in zero-sum games. 
Naturally, such regret guarantees also translate to welfare guarantees, but unfortunately that only applies in smooth games (see Syrgkanis et al. (2015)). Establishing improved guarantees in terms of the social welfare in general games is an open question.", " This paper proves a new phenomenon about the Optimistic Mirror Descent (OMD) algorithm in two-player general-sum matrix games (bimatrix games): The iterates either converge to an approximate Nash Equilibrium (NE), or converge to a Strong Coarse Correlated Equilibrium (CCE). This result links and improves over two existing understandings: (1) Convergence to NE is unlikely to be generally achievable by any efficient algorithm due to its PPAD-hardness; (2) Convergence to approximate CCE is achievable by any no-regret algorithm, but it is unclear whether such CCE is a strong CCE (in the present paper’s sense). Strengths:\n\nThe main message of this paper is conceptually quite interesting and new. Most existing works consider learning NE in two-player zero-sum games and CCE in multi-player general-sum games as parallel goals, as they can both be achieved by no-regret learning. On the other hand, learning NE in general-sum games is likely computationally challenging due to its PPAD hardness, and thus very sparsely studied in the context of no-regret learning (which mostly consider computationally efficient algorithms). \n\nThis paper essentially proves that, for the OMD algorithm and *two-player* general-sum games, if NE is not achieved, then convergence to CCE is stronger than standard bounds—you actually converge to $O(-\\epsilon)$-CCE (what’s called the “strong CCE” in the paper) rather than $O(\\epsilon)$-CCE. At a high level this seems to say something about the “optimization landscape” of bimatrix games, like it is perhaps more benign than the worst-case hardness / convergence results suggests. \n\nTechnically, it appears that the result follows by playing with the fundamental RVU property (and its proof) of OMD algorithms and making several smart observations. In particular, the proof (for the Reg_X part) follows by the observation that if NE is not achieved, then OMD iterates must move substantially (Proposition 3.2), which makes the negative term in the RVU dominate the positive term if learning rate is small enough, hence a *negative* regret and convergence to strong CCE. This argument appears new and may be applied in future work. It also gives me an impression that the RVU property may have other fruitful consequences yet to be discovered, despite its already powerful consequences for large bodies of work in this area. \n\nIt is also good to see the numerical experiments, in particular the fact that convergence to both NE and strong CCE are possible in general-sum games (so that the “either-or” statement in the theorem is unlikely to be improved, at least for this algorithm).\n\n\nWeaknesses:\n\nThe result has a feeling of being slightly preliminary and not very complete. One caveat is the perhaps restrictive set of assumptions: global smoothness of regularizer, two-player games, and very small learning rate. In particular I am a bit bothered by the small learning rate, which is $O(\\epsilon^2)$ compared with the standard $\\Theta(1)$ learning rate for OMD type algorithms. This also makes convergence to NE/Strong CCE slower than standard $1/\\epsilon^2$ time. I understand it is necessary in the current proof, but I wonder have the authors thought about larger learning rates (as mentioned in Appendix B)? 
Or alternatively, infinitesimally small learning rate like in gradient flow?\n\nAlso, the assumption of global smoothness of the regularizer rules out the very standard example of the entropic regularizer on the probability simplex (whose smoothness grows to $\\infty$ as we approach the boundary) for normal-form games, as mentioned in the open questions section in Appendix B. In numerical experiments, the authors consider L2 regularizers which are indeed globally smooth (thus satisfying the conditions of the theorem) but less standard. \n\nSection 4.1 needs a bit more polishing. Some important quantities are not well-defined (e.g. what is the “incentive-compatibility parameter”?). Apart from that, Figure 2 is also a bit confusing (e.g. what are the policies used to plot Figure 2? Since policies have many more degrees of freedom than 2). Some of the numerical details in the text also do not seem too important and may be moved to the Appendix. I do like Figure 3 which shows very clearly the negative regret and the improvement in social welfare. \n Please find some of my questions in the “weaknesses” part.\n\nAdditional questions:\n\nHow is Lemma 3.3 different from the standard RVU bound (e.g. Proposition 2.1 & Corollary 3.1)? Skimming over the proof, it seems not a corollary of the RVU bound, but the proof steps are quite similar to those of RVU. Could the authors provide more intuition on this Lemma?\n\nLast sentence in abstract, “our results suggest that cycling behavior of no-regret learning algorithms in games can be justified in terms of efficiency”--- it is a bit unclear to me what it means. Does “efficiency” mean convergence to Strong CCE? Or better social welfare? Same goes with the statement on Line 54-55. Perhaps the authors could expand or modify these statements to clarify. \n /", " Precisely as per the title, the paper shows that optimistic mirror descent (OMD) either converges to epsilon-approximate Nash equilibria, or the average empirical distribution of correlated play converges to an approximate \"strong\" coarse correlated equilibrium (eps-SCC). Here an epsilon-strong-coarse correlated equilibrium is parameterized by epsilon, which dictates the extent to which players see a decrease in utility from unilaterally deviating from the correlated distribution of play (where deviations are such that a player uses a single fixed strategy rather than following the correlated distribution's signals). An interesting consequence of the result lies in the fact that for constant epsilon, an eps-SCC is in fact an exact coarse correlated equilibrium, hence if the dynamic has not reached an eps-approximate Nash equilibrium, it must forcibly give rise to an exact coarse correlated equilibrium. Finally, the authors provide an in-depth analysis/visualization of the performance of OMD on a specific 3x3 game instance, as well as general performance on several game benchmarks. Strengths: \n-The paper is very clear and results are well-motivated. A large part of the clarity stems from the fact that results are simple to state with interesting consequences. Furthermore, the overall intuition behind main proofs is well-explained in spite of technical details being in the appendix. 
Furthermore, the results will be of interest to the general community at NeurIPS.\n\nWeaknesses:\n-The paper could have had more exposition regarding the setting where players do not have access to the full representation of the bimatrix game but rather infer knowledge of the game through oracles.\n-A minor point really, but it would be interesting to map the trajectory of play for the game instance in Section 4, especially for different initializations as mentioned at the end of 4.1. This would be especially useful to visualize for the main Lemmas of section 3.\n\nMinor point:\nIn the appendix you mention that the state of the art for approximate eps-Nash is the Tsaknakis and Spirakis 0.3393 + delta result. Recently there has been a paper by Deligkas et al. (https://arxiv.org/pdf/2204.11525.pdf) which improves this to 1/3 + delta.\n In the experimental section you mention the possibility of different initializations for OGD. Are there games where different initializations to the same game can give rise to either of the guarantees of your result (eps-Nash or eps-strong CCE)? \n\nSupposing that players do not have access to the representation of the game and infer utilities from potentially noisy oracles, are results robust to such a setting? \n The main limitation that came to mind was the applicability of methods to games with more than 2 players, but this was adequately addressed in the paper as a potential thread of future research. ", " This paper studies the dynamic behaviour of Optimistic Mirror Descent beyond two-player zero-sum games and considers two-player general-sum games (bimatrix games). It shows that the dynamic either visits an $\\epsilon$-NE or achieves a strong coarse correlated equilibrium. \nIn order to prove that, the authors observe the correlation in behaviour of the two players' strategies (Lemma 3.3 in Appendix) along with the existing regret bound of OMD (Proposition 2.1). I really like the result of strong CCE and the negative regret bound for both players (Theorem 3.4) as they are new and fundamentally better compared to previous results. However, the condition under which the paper needs to achieve these results is hard to interpret. That is, the dynamic needs to not visit an $\\epsilon$-Nash Equilibrium in the whole $T$ rounds. Hypothetically, OMD can have a chaotic behaviour while still visiting an $\\epsilon$-Nash Equilibrium. This type of chaotic behaviour is known for many no-regret algorithms. Thus, the result of visiting an $\\epsilon$-Nash Equilibrium is less exciting and provides little information. Besides, visiting an $\\epsilon$-Nash Equilibrium and last round convergence are two very different things, thus Theorem 1 does not imply the existing last round convergence result of OMD in zero-sum games. \n\nThe paper will be much stronger if the results in Theorem 1 can implement last-round convergence to NE (not visiting) or strong CCE. A relevant result in zero-sum games about $\\epsilon$-NE leading to last round convergence can be found in \"Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization\". I have a few questions to better understand the paper.\n\n1. I am wondering whether the existence of a strong CCE is guaranteed in every bimatrix game?\n2. In experiments, do the authors observe chaos behaviour of OMD? (e.g., the dynamic keeps approaching and moving away from a NE repeatedly)? \n\n there is no negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "z0ZaFv9FfpO", "yrsSr-OG_fS5", "z0ZaFv9FfpO", "55Bf-MuKmqr", "MQNquEsr47C", "mgyudeuJ8Tq", "nips_2022_evRyKOjOx20", "nips_2022_evRyKOjOx20", "nips_2022_evRyKOjOx20" ]
nips_2022_kEPAmGivMD
Deterministic Langevin Monte Carlo with Normalizing Flows for Bayesian Inference
We propose a general purpose Bayesian inference algorithm for expensive likelihoods, replacing the stochastic term in the Langevin equation with a deterministic density gradient term. The particle density is evaluated from the current particle positions using a Normalizing Flow (NF), which is differentiable and has good generalization properties in high dimensions. We take advantage of NF preconditioning and NF based Metropolis-Hastings updates for a faster convergence. We show on various examples that the method is competitive against state of the art sampling methods.
Accept
This paper proposes an inference method that combines gradient ascent and normalizing flows. The idea is that one could, in principle, simulate the deterministic Fokker-Planck equation, but this would require access to the density of the evolving approximating distribution, which is intractable. Thus, the paper proposes to maintain a set of particles and update a normalizing flow to approximate this density. The resulting procedure is deterministic, with an accuracy that depends on the number of particles and the power of the normalizing flow. Reviewers agreed this was an interesting approach and the experimental results are promising (albeit fairly low-dimensional). However, there were a few apparent weaknesses: Firstly, there was a lack of clarity about the theoretical guarantees. The authors state that this is all clear in the manuscript, but readers would undoubtedly benefit from a much more centralized/explicit description of what approximations are involved, and with what guarantees the method is claimed to work, which could be put in a single place. In addition, in trying to understand exactly what was done in the experiments, it is difficult to understand several of the details. Algorithm 1 is very helpful in this regard, but it would be beneficial to have a self-contained elaboration of all of the points (perhaps even in an appendix). Finally, the experimental results are all relatively low-dimensional. Often particle methods do not scale gracefully to higher dimensions. (I do not see this as a huge flaw because the method could be useful even if it does not scale, but readers would benefit from evidence on this point either way.) In the end, however, most of the above issues are issues of clarity and I am willing to trust the authors to fix these before final submission. The paper appears to present a novel idea and the community would benefit from seeing it and discussing it.
test
[ "OzXXsVkioUf", "Q_9eoGgDRL", "mBC_HqBFlHH", "CfyleVPIyTV", "EPmfgws6WKL", "aqR-iP9VvZv", "Evvy3TiGv-4", "l1sD92UFFL", "vPIoQV978EN", "KnlnpQk4OQA" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the further explanation of the comments. We want to emphasize that in our original response we were not trying to discredit the review, and it was not our intention to be defensive: we simply failed to understand the specific points the reviewer was raising. For example, it was not clear to us that when discussing stochastic versions the reviewer was referring to a different method (that of classical Langevin MC), rather than to our DLMC method. \n\nAs we currently understand it seems these are the main comments the reviewer has: \n\n1) the reviewer is pointing out the papers which combine stochastic Langevin MC with neural samplers: we thank the reviewer for these references which we will add to the revised version. These papers are similar in spirit to reference 14 (Hoffman etal) which does neural Hamiltonian MC. It is unclear to us why the reviewer believes these references 1-3 will outperform our method: we believe that HMC outperforms LMC in most settings and that neural HMC outperforms neural LMC in most settings. Since we compare against neural HMC, and show DLMC outperforms it (due to the slow training of neural HMC with VI), we believe that DLMC is competitive against SOTA. We'd be happy to performs further ablation tests against these additional methods. \n\nThe reason we do not emphasize more preconditioning or spend more time explaining it is that this is not a new idea, and that DLMC works also without it, i.e. it is not what distinguishes our paper from other work. The reviewer is suggesting we do spend more time and we will do so in the revisions. For example, Riemanian HMC of Girolami and Caldedhead is difficult to implement numerically because the Jacobian is position dependent. NFs have by construction simple Jacobians and thus can be viewed as a specific implementation of RHMC which can be simpler to implement. \n\n2) We are arguing that replacing the stochastic term with a deterministic term has advantages, despite the fact that there is clearly a trade-off. The advantage is that DLMC method is deterministic, i.e. it has no noise: noise slows the convergence. This is well established in the context of HMC vs LMC: LMC can be viewed as a single leapfrog step of HMC, followed by stochastic momentum resampling. Performing many leapfrog steps deterministically improves the convergence rate of HMC relative to a single step LMC. Recent work on generalized HMC makes a further step in this direction by making even the momentum resampling less stochastic (by partially preserving the direction, as pointed out by Horowitz, Neal etc). We go all the way by making it completely deterministic. The price we pay is that we have to estimate the current density with NFs. This is clearly a trade-off in that NFs are not perfect, but we show that the trade-off is beneficial in the examples where we tried it. \n\nWe are arguing that our method is new, in that NF+deterministic Langevin have not been proposed before, and that it shows a lot of promise on realistic examples. We are not arguing that it will become the best sampler on the market, it is too early to say this, but it is also not clear that it cannot become SOTA. Furthermore, we believe the motivation for our method is clear: deterministic is good, random walk is bad for sampling, so how can we do it as deterministic as possible? 
Our proposed method is well defined and the trade-offs are clearly specified, and it feels too negative to question the whole premise of what we do by asking why would one want to ever do this. We think deterministic >> stochastic is a good motivation and the empirical results support this. We will emphasize this point further in revisions. ", " Thanks to the authors for their response. This addresses my concerns, especially Q1 and Q5, which were the more major in my opinion. I'll update my rating to reflect.", " P.S some things have been better motivated in the rebutal, however unfortunately due to the above reasons I am not inclined to change my score for this current iteration of the manuscript.\n\n[1] Doucet, A., Grathwohl, W.S., Matthews, A.G.D.G. and Strathmann, H., 2022, March. Annealed Importance Sampling meets Score Matching. In ICLR Workshop on Deep Generative Models for Highly Structured Data.\n\n[2] Zhang, Q. and Chen, Y., 2021. Path Integral Sampler: a stochastic control approach for sampling. arXiv preprint arXiv:2111.15141.\n\n[3] Vargas, F., Ovsianas, A., Fernandes, D., Girolami, M., Lawrence, N. and Nüsken, N., 2021. Bayesian Learning via Neural Schr\\\" odinger-F\\\" ollmer Flows. arXiv preprint arXiv:2111.10510.", " > There is no stochastic noise in our procedure (it is deterministic), so we are unclear why the reviewer talks about stochastic noise, simulating Gaussian noise, stabilizing noise procedure, writes down Wiener noise term dW_t etc. None of this is relevant for what we do. \n\nThis is not a helpful response. There is relevance in my review, time was taken out to carefully read it and present it and the nature of this response somewhat undermines it. Nonetheless I will take more time out to clarify this particular point.\n\nYes the dynamics you work with are deterministic, however as you point out at the start of the paper there is a stochastic equivalent which has the same marginals from there you build the deterministic ODE with the FPK score as its vector field. My point here is that the preconditioning you do (which has an auxiliary variables flavour) follows a determinsitic ODE which can be interepreted via an SDE in the exact same form you convert from langevin dynamics to an ODE with a score (this is just going backwards). The careful analysis I aimed in motivating to the reviewer with the hope of a fruitful discussion (rather than being discarded and told my review is not relevant to what you do) was to interpret the preconditioning in the langevin dynamics setting.\n\nTo be more specific if I have a system of ODEs 1 for the preconditioner and one for the sampling dynamics this \"deterministic\" (initial condition is random() system admits an equivalent stochastic formulation in terms of langevin dynamics. Because the preconditioning felt poorly motivated in the initial draft of the paper I resorted to its stochastic interpretatoin (covnerted back to langevin dyanmics) to analyse the effect it had. I was indeed doubtful wheter p(z|y) was gaussian or not as the notation and setups in the text became somewhat contradictory, thank you for clarifying. \n\n> On preconditioning ...\n\nI understand well the benefits of preconditioning in different iterative schemes. My point was that the particular nature of your preconditioning was not well explained in a self contained manner. From my experience in sampling and inference most preconditioniers that I understand well are either linear or carefully constructed (e.g. 
reimanian ULA by Girloami and Calderhead), preconditioning with flows on the otherhand is a bit more contemporary and should merit a self contained exposition rather than blame/point the reviewer to reference 14. \n\n> ... reviewer is questioning the motivation and arguing that the method is not useful or wrong,...\n\nAgain this is another comment that is a bit defensive trying to discredit the review, rather than leveraging a discussion. I want to highlight some issues with this response:\n\n1. Even if empirical results are good, motivation is key. Empirical ablations whilst incredibly helpful should not be strong enough to make closing remarks on the lines of \"my results are sota the reviewer should not criticise the preimse of the method\". First of all since there are many other methods missing in the comparison, and the ablation is actually somewhat poor. ULA based samplers also admit preconditioniers a proper ablation of your comparing to these could have helped close the argument. Additionally there are several methods (e.g. MCD [1], PIS [2,3] ) which combine langevin dynamics with neural samplers which are likely to outperform this approach (or be similar), so \"good results\" is really an insuficent argument, (also note the theory / gaurantees behind some of these methods is heavily and well motivated in their respective manuscripts). \n2. I want to re-iterate my main and central point. I never said the method was \"wrong\", I said that starting from langevin dyanmics which is something that we can construct a consistent approximation to (euler mayurama) with well undesrstood convergence rates, why would I transform this theoretical nice and well behaved SDE into an ODE that involves a score term which is very difficult to approximate, you are introducing a new source of bias / error into the system and loosing gaurantees. To give a very simple analogy imagine I have a linear model solvable via a pseudo-inverse (dxd) but instead I reinterpret it as a kernel method in a linear RKHS and solve by computing its gram-matrix (NxN) with N>>d , why would I trade for a representation that is much more taxing computationally and theoretically yields the same method ? In gen modelling settings (where we have available samples) it does make sense working with the deterministic system faster convergence as you motivated but also since the score admits incredibly sound (statiscally) and computational approximations.\n3. Finally to re-iterate I am also not saying the proposed preconditioning is wrong, but the way its presented it is confusing (and led me to think it was unclear/adhoc). \n\n", " Q1: we cite Song et al several times (ref 6), including in key equation 3. Moreover, Song et al. is preceeded by ref 3 Maoutsa et al (as acknowledged in Song et al), which we refer to and compare against in introduction and in appendix A. \n\nQ2: we have dropped the time index, ie we wrote x instead of x(t), in places where we assumed this will not cause confusion. In this we have followed Maoutsa et al (ref 3), where it is also dropped between their eq 4 and 5. We refer to that paper for an alternative derivation that demonstrates the validity of our equations. We will clarify and improve upon our time notation.\n\nQ3-5: the reviewer has concerns about the preconditioning with NF. We refer to Hoffman et al (ref 14) for a more detailed motivation and \ndiscussion of the method. To summarize it: one can work in any parameter basis using a bijective map of parameters from x to z. 
One very simple basis is NF latent space, where q_t(z) is N(0,1), and grad V(z)=z. The target p(x|y) and U(x) is also transformed to p(z|y) and U(z) (eq 6). There are no approximations or assumptions in this procedure, and there is nothing ad-hoc about it: if one accepts the validity of a single step DLMC update in space x then the validity of a single step DLMC update in space z follows immediately, it is just a parameter transformation. \n\nAs for motivation, it is the same motivation as with any preconditioning, which is to reduce the condition number of gradient based methods: for high condition number problems gradient based optimization is slow because of zig-zagging along the narrow potential ravine. Second order methods improve upon this by using inverse Hessian information. A generalization of second order method is to allow spatially varying Hessian. NFs can thus be viewed as powerful preconditioners that go beyond Hessian type preconditioning and allow spatially varying Hessian for an even faster convergence of gradient based optimization. \n\nContrary to reviewer statement p(z|y) is NOT Gaussian N(0,I) until we converge, ie at the end of DLMC. Prior to that it is whatever eq 5 gives for it. \n\nThere is no stochastic noise in our procedure (it is deterministic), so we are unclear why the reviewer talks about stochastic noise, simulating Gaussian noise, stabilizing noise procedure, writes down Wiener noise term dW_t etc. None of this is relevant for what we do. The reviewer is correct that NF is the essence of our method, as we state in introduction, but there is no adhoc noise stabilizing procedure in our method. \n\nIt is perhaps worth pointing out that the NF changes at every iteration, ie q_t(x) depends on t since particles move. So while q_t(z)=N(0,I) (in the infinite N limit and assuming NF is a universal approximator that has converged) for any time t, grad U(z) is changing with time since the NF map is changing and thus the Jacobian of NF J=|dx/dz| is changing in time. So eq. 8 should not be viewed as some fundamental \nequation to be solved on its own, but rather a rewrite of equation 3 in the NF latent space basis, which changes with every iteration step. \nWe do this so that the gradient descent is accelerated because the condition number has been reduced. \n\nWe did not include algorithm for latent space because it is a simple replacement of equation 3 with eq 7 and 8. We will include it in revised version, as it is a one line change of the Algorithm 1.\n\nWe do perform ablation study of the impact of this preconditioning, this is shown in fig 8 where we show convergence is faster in NF latent space. As the reviewer correctly points out one can implement DLMC without this latent space preconditioning, but we find the convergence is faster with this step. \n\nWhen going from the Langevin to the deterministic version we trade the stochastic nature of Langevin MC, which is a random walk and thus slow, for the deterministic density term, which converges faster, but relies on our ability to determine the density from the particles accurately. This is a well defined trade-off and in our examples we show DLMC outperforms HMC, which outperforms LMC in cases we study here. 
We are trading random walk of LMC with NF approximation error at a finite N, and we show that the tradeoff is in favor of DLMC, typically by one order of magnitude or more for sequential version, and many more orders for parallel version, in the limit of expensive likelihoods, where the cost of NF evaluation is small compared to the likelihood cost. We clearly show that the method converges to the correct posteriors with far fewer likelihood evaluations than state of the art baseline HMC, so our empirical results confirm the theory behind the method is sound. We perform ablation studies in appendix which show that preconditioning is useful and that we converge faster. The reviewer is questioning the motivation and arguing that the method is not useful or wrong, but the empirical evidence suggests otherwise. We will expand the discussion to clarify these points better. ", " Q1: DL and MH steps are independent of each other, and each one separately converges to the true distribution under some technical conditions discussed below. They are unrelated to each other, and we can mix and match DL and MH steps, there is no need to view them in sequence. \n\nDL converges to the true distribution in the limit of large number of particles N and assuming NF is a universal approximator. See Moutsa et al. (ref 3) for related discussions in the context of KDE density. Under these conditions and for sufficiently small step size (related to the condition number, similar to the standard gradient based step size condition) the DL method converges, since the particles move only if the current NF density differs from the target, and the particles move in the direction of reducing the discrepancy. Once NF equals the target the gradient difference is zero (grad(U-V)=0) and particles no longer move. It is difficult to obtain any theoretical convergence rates \nfor a finite N. However, it is generally true that at a fixed N NF is a better density estimator than KDE, which underlies Maoutsa et al. This is shown in appendix A, where we compare the performance of the two methods. \n\nDL only converges to the true distribution for simple unimodal distributions, and not for multiple disjoint components. This is because DL alone will get the particles into individual peaks, but since it is gradient based it will not guarantee that their relative proportions are correct, and once they are trapped inside a peak DL cannot move them out. This is true for most samplers such as MH, HMC, LMC, which cannot handle multimodality in the absence of a separate algorithm such as annealing or tempering. With the addition of MH step we remove this limitation without having to resort to annealing or tempering, and we get it for free since NF is part of the algorithm already. MH step is thus needed in multi-modal settings for asymptotic convergence guarantees. MH step follows the same rules as other Metropolis-Hastings implementations and has the same convergence criteria: if we do MH step many times in a row we will converge. We found no need to do it in our examples, but can be done for additional convergence guarantees. \n\nQ2, 4: The reviewer is absolutely correct about the acceptance rate, which can be very low, especially in initial stages of the algorithm when \nthe actual particle distribution is broader than the target. However, the role of MH step in these stages is simply to remove the worst particle performers. 
One can, for example, think of these as being stuck in a local maximum far from the global maximum, or more generally view these poor performers as particle optimization being dependent on initialization. MH removes these worst performers and replaces \nthem with particles with a higher likelihood. We found that this really helps with the performance of DLMC, even though the MH acceptance rate may be low. We can add a figure showing how the acceptance rate changes with the number of iterations, but we stress that a low acceptance rate of MH does not indicate slow convergence in DLMC, since it still helps the overall convergence rate of DLMC. \n\nQ3: We apologize for the lack of clarity, all the examples assume N>2d, so the comment is meant in this context. We have not explored N<2d (e.g. N=1), since in this limit NF fails to converge to a meaningful density distribution. Our remark was meant to convey that there is a broad range of values of N where the overall cost is approximately constant in the serial mode. In the parallel mode (P-DLMC), choosing a higher N is almost always beneficial, but we only tested this up to N=2000. \n\nQ5: P-DLMC: we have implemented an embarrassingly parallel version of DLMC on NERSC. In our version we do not parallelize the individual likelihoods, but instead we run each likelihood on an individual core, with N cores, where N is the number of particles. So for the example we show this is still 60 seconds for each likelihood, and the NF cost of 1 second is small compared to 60 seconds. Our implementation seems to differ from the reviewer's example, where the reviewer argues parallelization reduces the cost of a single likelihood to near zero: in our approach the speedup we obtain is close to N and is not limited to 60 in this example. Since all the cores are performing the same likelihood evaluation (with different parameter values) their evaluation time is nearly the same, so there are no load balancing issues. We included the overhead associated with communication and NF cost in our plot. ", "
Q1: VI with NF has been tried in many previous papers, and the overall conclusion is that it often does not converge to the true target, even when the approximating NF function is sufficiently powerful. One example is the Hoffman et al. paper (ref 14), where they pretrain using VI with an enormous training time (they use 20 million likelihood calls). They still need to follow up with the preconditioning method, using HMC sampling in NF latent space. Another example is Modi, Li, Blei https://arxiv.org/pdf/2206.15433.pdf, where they point out that the mode-seeking nature of the reverse KL divergence underlying VI leads to mode collapse (i.e. a too narrow posterior) even though the NF is sufficiently powerful. \n\nWith regard to Q1(b) SINF+MH, we perform this ablation study in the supplementary material, fig. 9. We show the DL step is required for a fast convergence. We believe these examples show that particle dynamics based NF methods are superior to pure VI+NF. \n\nQ2: NUTS is a state of the art baseline that is widely used (e.g. in STAN, PyMC3 etc.). We have followed the recent Hoffman et al. (ref 14), which only compares to NUTS. We are not aware of any other sampler that consistently outperforms NUTS in terms of evaluation time.\n\nQ3: Most of our examples are 50-100 dimensional, with very non-Gaussian posteriors. They are common and they are not easy: most samplers struggle on these examples and fail, or take a very long time. 
No strong baselines for those models are used, just NUTS which of course is slow.\n\n3. The tested models are _very_ low-dimensional. The high dimensional setting might have very poor performance, this is not known.\n\n4. I expect multimodal likelihoods with broad priors to result in a lack of mixing - I doubt all modes will be discovered even in low dimensional cases, when they are spaced and sharp. Again this is not tested even in a 2-dimensional experiment with many gaussians that are well separated. \n\n\nMinor comments:\n- Parentheses are used for bibliography citations, should be square brackets, not to be confused with cited equations.\n- In the MH update in Algorithm 1, time indices are missing.\n\n[1] Black Box Variational Inference. Rajesh Ranganath et al.\n\n\n Addressed in the previous section.\n\n\n", " The work proposes a Bayesian inference procedure constructed using a deterministic Langevin equation. The resulting algorithm uses a number of particles initialized at random, their dynamics then following the deterministic equation toward its equilibrium distribution, while at regular time intervals a normalizing flow is fit to the collection of particles and used as an independence proposal within Metropolis-Hastings to propose replacing each particle with a new draw. The method is demonstrated on a number of toy examples and standard data sets.\n There are some interesting aspects to this work, such as the use of the deterministic Langevin equation, and the empirical results are quite good, especially the ablation studies in the supplementary material that separate out the various components that come together in the final algorithm. There are some gaps, however, that are stopping me from recommending acceptance.\n\nWhile I think I understand the construction of the algorithm, it is not clear to me that this results in samples distributed according to the target distribution, even up to the approximations introduced (e.g. fitting the normalizing flow to the current stock of particles, time discretization). With that said, the empirical results suggest that the algorithm works as intended numerically; if there is a bias it may be small. I've added some questions for the authors on this topic below and hope they can help me to better understand their work.\n\nIt is unclear to me how the results for P-DLMC have been obtained (e.g. in Figure 3). The authors note that these are based \"...on the assumption that we can evaluate the likelihood gradient in parallel on N CPU cores\" (bottom p.7). Is there actually a parallel implementation here? In my opinion, unless there is a parallel implementation to provide empirical results, this ought to be an item for the discussion, not the experimental section. Otherwise, one is not accounting for the real-world challenges of parallel computing (fewer than N cores, possibly variable execution time between particles leading to load balancing issues, communication overhead in gathering all particles to update the normalizing flow, etc). \n\nIf it is a model rather than actual implementation, have the authors simply divided through by N? Or have they used a model where only some p portion of the sequential program can be parallelized, thus using a scaling factor such as (1 - p + p/N)? 
There is a comment about the assumption that evaluating the likelihood gradient takes about a minute, while the cost of the normalizing flow is in the seconds, but even that limits the maximum speedup to much less than N (say 60 seconds likelihood gradient parallelized down to essentially zero, 1 second for normalizing flow, that still limits speedup to factor of 61, i.e. 61 seconds down to 1 second, even when N >> 61). These are some of the challenges of scaling these sort of population/ensemble algorithms in practice (SMC has similar challenges).\n * Before the Metropolis-Hastings step, aren't the particles only approximately distributed according to the target distribution, given time discretization to get them there, and applying the Metropolis-Hastings kernel will not leave them properly distributed, either? This is not the case with HMC or MALA, say, where the time-discretized move is itself the proposal (over the current state of the Markov chain), and the Metropolis-Hastings rule accepts or rejects it. Here it seems there is a particle drawn from the time discretization, and a proposal drawn from the normalizing flow fit to the collection of those particles, and we accept or reject the swap. Perhaps the authors could better explain how this yields samples from the target distribution, or clarify the claims (e.g. first paragraph of conclusion claims \"To make it asymptotically unbiased we add MH step...\").\n\n* Independence proposals rely on the proposal being a good approximation of the target distribution in order to have a reasonable acceptance rate, and so for the Markov chain to mix well. I can accept that a normalizing flow can achieve a good approximation with sufficient size, but I do not see any empirical results on this part of the algorithm, e.g. what is the average acceptance rate of this step in the examples?\n\n* Bottom p.7 there is a comment that, \"...we find there is a tradeoff between number of particles and number of iterations, such that starting with a larger number of particle does not always imply a higher overall computational cost.\" I can see how this might be the case, but can also seem the limits of it, as in e.g. the extreme case of reducing to one particle. A plot showing this tradeoff would be a good addition.\n\n* As above, specific numerical results on the swap moves (e.g. acceptance rate) would be a good addition.\n\n* As above, some clarity on the presentation of the parallel version.\n No concerns.", " The authors of this work propose using the deterministic “dual” of the langevin equation for simulating from a target posterior. This dual presents difficulties since it requires estimating the score of the FPK equations solutions, to do this the authors propose using normalizing flows, furthermore , the authors “precondition” the flows in latent space via simulating what seems to be latent Gaussian dynamics at equilibrium, this “preconditioning” improves/stabilizes the overall procedure. Pros:\n1. A sound methodology for simulating the deterministic “dual” of the Langevin equation using normalizing flows.\n2. Sound and successful experimental procedure.\n\nCons:\n\n1. The overall guarantees of the method (theoretically) are not clear and there's no acknowledgement if the method is mostly heuristic. \n2. Furthermore, intuition behind the method's preconditioning step is also not presented clearly. There are some potential conceptual/theoretical issues illustrated by the reviewer in point 3 (with a potential contradiction).\n3. 
Lack of motivation when moving from langevin to its stochastic counterpart.\n Comments/Suggestions:\n\n1. The deterministic Langevin Equation is well known and used in the diffusion score matching literature [1] (see Eq X) and should be acknowledged appropriately and maybe briefly discussed. Notice the velocity term coincides with the score. \n2. Equation 2 is broken U(x) should be U(x(t)) in order to be consistent, however this is also wrong, the FPK equation is defined pointwise for all x \\in R^d that is its defined as q(x,t) or q_t(x) , it's not only defined pathwise on x(t), this notation is inaccurate. The f(x(t)) notation should be reserved for the deterministic langevin equation, but the FPK equation is defined pointwise. Across the whole paper the usage of q(x), q(x(t)) and x vs x(t) in general is quite inconsistent.\n3. From flows setup you would expect p(z|y) to be Gaussian but this is never made clear, instead you refer to \\pi(z) being Gaussian which was never explicitly defined in the context of p(x|y) ? let's assume that in your notation \\pi(z) = p(z|y) = N(0,1) then indeed the score of q_t(z) is infact -z (i.e. V(z)=z) however this implies that z(t) evolves entirely according to linear Gaussian dynamics , not only that but if the score is indeed -z this implies that q_t(z) = N(z; 0,1) which is at equilibrium at does not depend on time whatsoever. What is the point of simulating gaussian noise at equilibrium in latent space ? Formally I cannot see how this can obtain finite time estimates of q_t(x) (t< \\infty) \n4. To clarify the previous point a bit further if indeed V(z)=z then this implies directly that -ln q_t = || z||^2/2 which implies directly that q_t is an independent Gaussian where q_t(z) solves the FPK equation for the density implied by the ODE dynamics in Eq 8. Remember U(z) and V(z) cannot be chosen independently, given U(z) then grad(-U(z)) implies an FPK equation where V(z) depends on the solution to that equation. One can't just go oh let V(x) = ln Gaussian without first proving that q_t(z) = Gaussian.\n5. Continuing the line of reasoning dz_t =grad(-U(z))dt +dW_t should have as its steady state N(0,1) if z_0 where samples from \\pi(z)=p(z|y) then in fact V(z) = z can easily be derived, but again you'd be just simulating an OU process at equilibrium … What is the advantage of this? Can you formally prove an advantage of doing this? Or at least motivate it conceptually ?\n\n\nQuestions on methodology:\n\n1. In your algorithm you don't have any pseudocode for the z(t) dynamics ? Why is this ? I found this quite confusing.\n2. For estimating q_t(x) you just say run normalizing flows on x(t) ? so here you just do density estimation with normalizing flows correct ? (you learn flows in the standard way) . It really feels this is the essence to the method, all that is being done is running normalizing flows to estimate q_t(x) and then V(x) and then some adhoc stabilizing noise procedure is done with a lack of motivation. \n3. How is the preconditioning used then ? you do some extra gaussian dynamics in latent space with the learned flows to stabilize things a bit ? (Please clarify and give more motivation)\n4. I can't seem to find an ablation for the effect of preconditioning/simulating the Gaussian, what effect does the adhoc preconditioning routine have? You can still use NF to fit the score without this step.\n Overall there are clear counter points in this work that lack detailed discussion:\n\n1. 
When going from Langevin to the deterministic version, more approximation is required; in a sense it is a harder problem. Whilst this allows for a hybrid version between ULA / normalizing flows, the authors do not motivate well why one would want to pursue such a problem (which requires more difficult approximation schemes).\n2. The whole derivation of the latent dynamics in the flow/gaussian space seems lacking and is also poorly motivated. At the end of the day the equations you have seem to simulate Gaussian dynamics at equilibrium, which is either not particularly useful or wrong (if somehow they are out of equilibrium in practice). \n\n\n\n[1] Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S. and Poole, B., 2020. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "mBC_HqBFlHH", "aqR-iP9VvZv", "CfyleVPIyTV", "EPmfgws6WKL", "KnlnpQk4OQA", "vPIoQV978EN", "l1sD92UFFL", "nips_2022_kEPAmGivMD", "nips_2022_kEPAmGivMD", "nips_2022_kEPAmGivMD" ]
nips_2022_CZwh1XdAhNv
Uncoupled Learning Dynamics with $O(\log T)$ Swap Regret in Multiplayer Games
In this paper we establish efficient and \emph{uncoupled} learning dynamics so that, when employed by all players in a general-sum multiplayer game, the \emph{swap regret} of each player after $T$ repetitions of the game is bounded by $O(\log T)$, improving over the prior best bounds of $O(\log^4 (T))$. At the same time, we guarantee optimal $O(\sqrt{T})$ swap regret in the adversarial regime as well. To obtain these results, our primary contribution is to show that when all players follow our dynamics with a \emph{time-invariant} learning rate, the \emph{second-order path lengths} of the dynamics up to time $T$ are bounded by $O(\log T)$, a fundamental property which could have further implications beyond near-optimally bounding the (swap) regret. Our proposed learning dynamics combine in a novel way \emph{optimistic} regularized learning with the use of \emph{self-concordant barriers}. Further, our analysis is remarkably simple, bypassing the cumbersome framework of higher-order smoothness recently developed by Daskalakis, Fishelson, and Golowich (NeurIPS'21).
Accept
Given the unanimous support from the reviewers, with which I genuinely agree, the paper is recommended for acceptance. I encourage the authors to pay close attention to the reviewers' comments and suggestions (and in particular to the comments of Reviewer Wcat) when working on their final version.
train
[ "4YRAzl0L67", "IpDjJAovtgzV", "yKW4kNEZkO2", "3Ku18-sDD_p", "yXnmIfighWQ", "MYultwGy67", "u62oa6K6FmL" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their response.\n\n--- \"While the regret bound of Daskalakis et al. (2021) depends logarithmically on the number of actions...\"\n\nIndeed I missed the fact that for swap regret, the rates must scale polynomially with the number of actions as opposed to external regret - so I retract my comment regarding this issue.\n\n--- \"This is a great question, and, in our opinion, the most important open question stemming from our work...\"\n\nI find the authors' comment on the possibility of Optimistic Hedge enjoying a bounded second order path length very interesting. Since they mention that it is the most important question stemming from their work, I would expect some more elaborate discussion on this manner and the relevant future research question, as I didn't find a noticeable mention of it in their paper.", " We thank the reviewer for the helpful feedback. Below we address the reviewer’s questions. \n\n--- *“The following is only a minor issue, though I think it's worth mentioning. The regret bound presented in the paper suffers from a polynomial dependence on $m_i$, the number of actions available to each player. This is in contrast to the previously best known regret bound of Daskalakis et al. (2021) in which the dependence on the number of actions is only logarithmic. Presumably this dependence on $m_i$ stems from the fact that each player i uses an algorithm comprised of [...]”*\n\nWhile the regret bound of Daskalakis et al. (2021) depends logarithmically on the number of actions, that bound applies only to external regret. In contrast, for swap regret all the known bounds depend polynomially on the number of actions since, as the reviewer points out, each player employs one external regret minimizer for each action. This discrepancy between external and swap is, in fact, inherent as is evident by the $\\text{poly}(m)$ lower bound of Blum and Mansour (2007). \n\n--- *“The method of analysis presented in the paper naturally raises the question of whether or not the Optimistic Hedge algorithm (which has been shown by Daskalakis et al. [2021] to obtain $\\text{polylog}(T)$) regret enjoys a bounded path length property similar to the property given in Theorem 4.4 for BM-OFTRL-LogBar. If this was indeed the case, then combining the RVU property of optimistic hedge with this bounded path length would yield a logarithmic regret bound, with a considerably simpler algorithm. My question for the authors regarding this observation is whether or not they looked into the possibility of optimistic hedge having log-bounded path length, or rather is it maybe straightforward to see that it does not have this property?”*\n\nThis is a great question, and, in our opinion, the most important open question stemming from our work. Based on some empirical evidence, it seems that Optimistic Hedge enjoys a constant second-order path length bound. Should that be proven, that would immediately imply the first $O(1)$ regret bound for each player. However, the technique we employ for obtaining an RVU bound for swap regret does not apply for Optimistic Hedge since Lemma 4.2 crucially uses the local norm induced by the log-barrier. So, analyzing Optimistic Hedge within our framework would likely require new ideas.\n\nReferences \n\nBlum and Mansour (2007): From external to internal regret.\n\nDaskalakis et al. (2021): Near-optimal no-regret learning in general games.", " We thank the reviewer for the helpful feedback. Below we address the reviewer's questions. 
\n\n--- *“I would think it as a little overclaim by saying the framework bypassing the cumbersome framework [...]”*\n\nFirst, we clarify that the analysis of Daskalakis et al. (2021) gives a logarithmic dependence on the number of actions, but their bound applies only to *external regret*. On the other hand, for *swap regret* all known results require a poly(m) dependence; note that there is in fact a $\\text{poly}(m)$ lower bound for swap regret due to Blum and Mansour (2007). Now the framework of Daskalakis et al. (2021) subsequently led to a refined analysis of the algorithm of Blum and Mansour with an $O(n m^4 \\log m \\log^4(T))$ regret bound (see Anagnostides et al. (2021)), which we improve both in terms of the dependence on m and T. It is in that sense that we bypass their framework, but we were definitely not attempting to understate in any way the breakthrough result of Daskalakis et al. We will clarify this.\n\n--- *“In Table 1 [...]”*\n\nThanks, we will modify the entry as we see how it can cause confusion.\n\n--- *“If one only need to converge to coarse correlated equilibrium, could the algorithm in this paper (perhaps without the BM modification) give better regret bound with dependency on [...]”*\n\nThat’s a very good question. For coarse correlated equilibria, it is direct to see that our framework gives a linear dependence on $m$, instead of $m^{5/2}$ we have in the paper. The basic idea is that, from the perspective of Phi-regret, one can consider the set of deviations Phi that includes all constant transformations, along with the identity transformation; that guarantees nonnegativity for the regret. We chose to focus on swap regret because of its important implications for correlated equilibria, as well as the fact that it is the strongest notion of hindsight rationality in games. We will make sure to elaborate on these nuances in our revised version.\n\n--- *“Do we have a good explanation for why some plots in Figure 2 oscillates? and why the plot (2,2) seems not to scale with $O(\\log T)$ at all?”*\n\nFirst, regarding the oscillations, since these experiments are performed on general-sum two-player games, we do expect (in general) oscillatory behavior; otherwise, the dynamics would converge to Nash equilibria, which in general-sum games is precluded by many impossibility results. Indeed, notice that if no-swap-regret learning dynamics converge pointwise, then the limit point has to be a Nash equilibrium—since it is a correlated equilibrium that also happens to be a product distribution. \n\nIt is hard to say why the (2,2) plot does not appear to scale with $O(\\log T)$, but it is not surprising that in some games the performance we observe is slightly better than the theoretical worst-case guarantee our analysis provides.\n\n--- *“The regret bound only scales with $m^{5/2}$, making it worse than the bounds in previous work.”* \n\nAs we highlight in Table 1, our regret bound in terms of $m$ is actually the best known near-optimal guarantee using the fundamental algorithm of Blum and Mansour (2007). Note that Anagnostides et al. (2021) obtained only a dependence of $m^4 \\log m$ in terms of $m$ for the algorithm of Blum and Mansour, although we note that they also analyzed a different algorithm, due to Stoltz and Lugosi, that led to an $m \\log m$ dependence.\n\nReferences \n\nBlum and Mansour (2007): From external to internal regret.\n\nDaskalakis et al. (2021): Near-optimal no-regret learning in general games.\n\nAnagnostides et al. 
(2021): Near-optimal no-regret learning for correlated equilibria in multi-player general-sum games.", " We thank the reviewer for the helpful feedback. Below we address the reviewer’s questions.\n\n--- *“In Table 1, for fair comparisons, perhaps the algorithm “SL-OMWU” from (Anagnostides et al., 2021) should also be mentioned?“*\n\nOur intention was to highlight prior results about the algorithm of Blum and Mansour (2007), but we will also include the bound regarding the algorithm of SL-OMWU in our revised version as we agree with the reviewer that it might cause some confusion. \n\n--- *“I feel like the importance of the “nonnegativity of regret” should be emphasized even more in the presentation”*\n\nWe will make sure to emphasize more the importance of nonnegative regret in our revised version, as we agree with the reviewer that this simple observation is at the heart of our technique. \n\nReferences\n\nBlum and Mansour (2007): From external to internal regret.\n\nAnagnostides et al. (2021): Near-optimal no-regret learning for correlated equilibria in multi-player general-sum games.", " This paper studies accelerated no-regret learning algorithms for minimizing the swap regret in multi-player normal-form games. The main contribution is an algorithm that enjoys $O(\\log T)$ swap regret when employed by all players simultaneously, which improves slightly over the prior best result $O(\\log^4 T)$. The algorithm BM-OMWU-LogBar combines the classical Blum & Mansour’s no-swap-to-external reduction (henceforth BM reduction), and Optimistic Follow-The-Regularized-Leader (OFTRL) with log-barrier regularization for external regret minimization.\n Strengths:\n\nThis paper is an addition to the line of work on accelerated no-regret learning in normal-form games, when all players deploy the same no-regret learning algorithm. This type of results imply faster than $1/\\sqrt{T}$ convergence to equilibria in games and thus could be of interest to the online learning / games community. \nIn particular, the result can be seen as a follow-up to the recent breakthrough of $O(\\log^4 T)$ regret of Optimistic Hedge in multiplayer general-sum games (Daskalakis et al. 2021) and $O(\\log^4 T)$ swap regret in the follow-up work of (Anagnostides et al. 2021). \n\nThe main strength of this paper, in my opinion, is a new proof route of this type of result using the nonnegativity of individual regrets plus the log-barrier regularization. The proof route appears to be much simpler than the above existing work. \n* The “nonnegativity of individual regrets”, which holds for the swap regret but not the usual external regret, allows the individual swap regrets for each player to be bounded by their total (summed) regrets, which is much simpler to bound. For external regret, this lack of nonnegativity was precisely the reason why $O({\\rm polylog}(T))$ regret in multiplayer general-sum games (Daskalakis et al. 2021) was much more challenging to establish than two-player zero-sum games (Rakhlin & Sridharan 2013). 
\n* The log-barrier regularizer induces an RVU bound whose “negative iterate stability” term directly controls the stability of the fixed-point operation within the BM reduction (Corollary 3.2 & Lemma 4.2), which can then directly bound the \"positive loss stability\" term within the RVU bound when summed over all players\n\nOverall, I think this line of arguments is new to this line of work (even though the two facts above are standard / known), and substantially simplifies the challenge of establishing $O({\\rm polylog}(T))$ regret over the arguments of (Daskalakis et al. 2021, Anagnostides et al. 2021).\n\nWeaknesses:\n\nThe improvement from $\\log ^4T$ to $\\log T$ is perhaps a bit incremental. There is also an improvement in the action ($m$) dependence over the prior algorithm BM-OMWU, though I think another algorithm SL-OMWU in the same work achieves even better action dependence already? (see “Questions” below.)\n In Table 1, for fair comparisons, perhaps the algorithm “SL-OMWU” from (Anagnostides et al. 2021) should also be mentioned? That algorithm achieves $O(n\\log m\\log ^4 T)$ internal regret, and thus $O(nm\\log m\\log^4 T)$ swap regret, by the standard bound SwapReg <= m * max{InternalReg, 0}.\n\nI feel like the importance of the “nonnegativity of regret” should be emphasized even more in the presentation. Right now, this fact is only briefly mentioned in Line 93-97 & 261-262. In my opinion, this is precisely what allows the proof to be much simpler than (Daskalakis et al. 2021), and closer to the original arguments of (Rakhlin & Sridharan 2013, Syrgkanis et al. 2015). \n\n---\nAfter rebuttal: I thank the authors for their response, and would be glad to keep my current evaluation of the paper.\n /", " \nThis paper studies how to design an uncoupled online learning algorithm to learn a correlated equilibrium, a fundamental problem in online learning and game theory. This paper develop a novel framework to analyze Optimistic FTRL that bypasses the high-order differentiation framework of [DFG21]. It gives regret bounds with better $T$ dependency compared to previous work.\n \n\nStrengths:\n1. This paper gives the first regret bound that only scales with $O(\\log T)$, improving upon the previous best $O(\\log^4 T)$ dependency. \n2. The algorithm in this paper maintains both $\\log T$-type swap regret and $\\sqrt{T}$-type adversarial regret, which is desirable from both theoretical and empirical perspectives.\n3. Synthetic experiments are conducted to illustrate the theoretical results.\n4. The paper is well-organized and the writing is good. I did not find typo.\n\nSee Limitations for Weaknesses. \n1. I would think it as a little overclaim by saying the framework *bypassing* the *cumbersome* framework of [DFG21]. First, [DFG21] could give regret bound that only scale with $m$ while this paper requires $m^{5/2}$. Therefore, I think it is inappropriate to say that the framework could bypass the framework of [DFG21], because it could be possible (though might be unlikely) that the framework of [DFG21] could not be bypassed to show regret bound that only scales with $m$. \n3. In Table 1, $O(\\sqrt{m \\log mT})$ should be written as $O(\\sqrt{m T \\log m})$ in order to avoid confusation with $O(\\sqrt{m \\log(mT)})$.\n4. If one only need to converge to coarse correlated equilibrium, could the algorithm in this paper (perhaps without the BM modification) give better regret bound with dependency on $m$ smaller than $m^{5/2}$? \n5. 
Do we have a good explanation for why some plots in Figure 2 oscillates? and why the plot (2,2) seems not to scale with $O(\\log T)$ at all? \n1. The regret bound only scales with $m^{5/2}$, making it worse than the bounds in previous work. Though the $\\log T$ result is strong, from a learning perspective, it is more important to optimize dependency on polynomial factors instead of poly-log factors. I think the author acknowledged this. ", " The authors consider the problem of no-regret learning in general-sum multiplayer games. They propose an algorithm for each player which guarantees that the individual swap regret after $T$ rounds is bounded by $O(\\log T)$, improving upon the previously best known bound of $O(\\log^4 T)$ in general-sum games. This in turn leads to an algorithm for computing an approximate correlated equilibrium in general-sum games. The proposed algorithm for each player is decentralized, in the sense that every player only needs to observe her own losses in order to update her policy after each round, and is oblivious to the actions taken by other players. In addition, the authors establish adversarial robustness of the proposed algorithm, i.e. that each player is guaranteed $O(\\sqrt{T})$ regret when faced with an adversarial loss sequence.\n\nThe algorithm follows a swap-regret minimization paradigm introduced originally by Blum and Mansour (2007), instantiated over several instances of \"optimistic follow-the-regularized-leader\" with a log-barrier regularizer. The analysis in the paper mainly hinges on two properties of the learning dynamics: $O(\\log T)$-bounded second-order path length of each player's policy sequence, and an RVU property of the swap-regret minimization algorithm. Strengths:\n* The main result of this paper is highly non-trivial and strictly improves upon the previously known regret guarantees in general-sum games in terms of the dependence on $T$. In addition, this result is achieved by a simple and efficient decentralized algorithm.\n* The analysis of the proposed algorithm is novel, interesting and surprising in its simplicity compared to previous works. Specifically, the use of swap-regret minimization in order to guarantee non-negativity of the regret, which together with a non-trivial RVU property, yields a path length bound on the players' iterates, and in turn the desired regret bound.\n* The proofs of the technical claims presented in the paper are clear and the novel ideas in the analysis may be useful for future works.\n\nWeaknesses:\n* The following is only a minor issue, though I think it's worth mentioning. The regret bound presented in the paper suffers from a polynomial dependence on $m_i$, the number of actions available to each player. This is in contrast to the previously best known regret bound of Daskalakis et al. (2021) in which the dependence on the number of actions is only logarithmic. Presumably this dependence on $m_i$ stems from the fact that each player $i$ uses an algorithm comprised of $m_i$ instances of optimistic FTRL, and this causes the number of actions to appear as a multiplicative factor in the regret bound. I think the authors could briefly discuss this discrepancy compared to the previous result by Daskalakis et al., even though it is arguably a lot less substantial than the improved dependence on $T$. The method of analysis presented in the paper naturally raises the question of whether or not the Optimistic Hedge algorithm (which has been shown by Daskalakis et al. 
[2021] to obtain polylog($T$) regret) enjoys a bounded path length property similar to the property given in Theorem 4.4 for BM-OFTRL-LogBar. If this was indeed the case, then combining the RVU property of optimistic hedge with this bounded path length would yield a logarithmic regret bound, with a considerably simpler algorithm. My question for the authors regarding this observation is whether or not they looked into the possibility of optimistic hedge having log-bounded path length, or rather is it maybe straightforward to see that it does not have this property? See the main review." ]
[ -1, -1, -1, -1, 8, 8, 8 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "IpDjJAovtgzV", "u62oa6K6FmL", "MYultwGy67", "yXnmIfighWQ", "nips_2022_CZwh1XdAhNv", "nips_2022_CZwh1XdAhNv", "nips_2022_CZwh1XdAhNv" ]
nips_2022_fT9W53lLxNS
SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos
The visual world can be parsimoniously characterized in terms of distinct entities with sparse interactions. Discovering this compositional structure in dynamic visual scenes has proven challenging for end-to-end computer vision approaches unless explicit instance-level supervision is provided. Slot-based models leveraging motion cues have recently shown great promise in learning to represent, segment, and track objects without direct supervision, but they still fail to scale to complex real-world multi-object videos. In an effort to bridge this gap, we take inspiration from human development and hypothesize that information about scene geometry in the form of depth signals can facilitate object-centric learning. We introduce SAVi++, an object-centric video model which is trained to predict depth signals from a slot-based video representation. By further leveraging best practices for model scaling, we are able to train SAVi++ to segment complex dynamic scenes recorded with moving cameras, containing both static and moving objects of diverse appearance on naturalistic backgrounds, without the need for segmentation supervision. Finally, we demonstrate that by using sparse depth signals obtained from LiDAR, SAVi++ is able to learn emergent object segmentation and tracking from videos in the real-world Waymo Open dataset.
Accept
Three out of four reviewers provided positive reviews and scores for this submission. They agreed that SAVi++ makes meaningful improvements over the previously proposed SAVi model. Importantly, while most past approaches evaluate on synthetic data, this submission evaluates the proposed model on a real-world dataset. The proposed model clearly improves over the baseline, and a clear ablation analysis shows where the improvements come from. One reviewer had concerns about the evaluation using just one real-world dataset. This was also brought up by other reviewers, who mentioned that the Waymo dataset has less diversity and fewer videos than others. While a more thorough evaluation would make this a stronger submission, the leap from synthetic evaluations to real-world evaluations in this line of research is notable and sets the bar for future work. I also note, based on the discussion, that the employed dataset is not trivial and poses several challenges for the model. Another concern raised by the reviewer was about missing baselines. The authors did provide additional baselines in their response. While these baselines do not exactly match the ones requested by the reviewer, I think they provide good evidence that the proposed method is able to employ the depth signal effectively. Overall, this paper makes solid progress on the problem, provides value to the readers, and provides strong results on a real-world dataset. Given these reasons, I recommend acceptance.
test
[ "EmkN7fQ7Yw", "Qow2QeN1us5", "Y1laGmH7ck", "EzxhNX1JH3dh", "QAOh2co0qq5", "aqJYLX2V97n", "8JTWl4RBcbV0", "U9sJnnbyeYK", "C2YjG_xK6w", "PReW7YEIU_", "cSZpU80SlJ6", "xMHjW67dp1x", "chFz6KmAN3w", "dBZjrMSEQYy", "y8LWgMxvfra", "OoSNH64uIB6", "Sbjqw-H1l6" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the authors rebuttal as well as the additional experiments. I thank the authors for the detailed response.\n\nUnfortunately, central issue of comparison against weak baselines is not resolved. \n\nIn particular, answers to Cons 4. (i)-(v) remain unsatisfactory to me. Dataset choice is not strong, and more segmentation datasets with more existing baselines should be shown. Results on MOVI-A,B is in fact poorer than previous method.\n\nMost importantly, the compared baselines are very weak; they don't even take into account 3d data. The main takeaway from the paper seems to be additional signals boost performance, which is interesting, but not particularly unexpected. Essentially, the paper is comparing apples to oranges; proposed method has access to additional data while baselines don't. This is why I suggested some trivial extension of ODIN to use 3d-maps. I don't believe this is outside the scope of the paper, instead integral to judging the paper.\n\nBasically, the authors need to show that the proposed method of using 3d-maps is in fact the best way, and no other simpler methods can work. If a direct late-fusion kind of technique gives comparable performance, then the proposed method has very little relevance. More data points are needed to adequately qualify the effectiveness of the method.\n\nAt this point, I would keep my score of 3, primarily because the experiments are not conclusive of the effectiveness of the proposed method. ", " We have discovered a potential issue with how the CoM distance is computed on Waymo Open in the conditional setting due to how empty segments are evaluated. The CoM metric we report in the paper (unintentionally) assigns a value of 0.0 in this case. We tested another version that assigns a value of 1.0 (corresponding to the maximum distance achievable in a single frame). The result we obtained for SAVi++ in the latter case is 7.2 ± 0.2 %, which is higher than the value of 4.4 % reported in the paper and higher than the value obtained for the BBox Copy baseline.\n\nThe choice of 0.0 does not affect our model comparisons and findings as it does not give any of the models a particular advantage, except for the BBox Copy baseline, which does not “predict” empty segments by design. Since it is unclear what the right cost should be for empty segments, this makes this particular comparison under this metric less interpretable. \n\nWe propose to measure this behavior using a separate metric, i.e. by measuring in what fraction of the cases an empty segment is predicted even though some part of a box remains visible, which will be included in the paper to complement our current CoM scores. Note that our BBox mIoU metric, which we similarly report for all conditional models, remains unchanged where also the benefit of SAVi++ over the BBox Copy baseline is clearly apparent.", " I thank the authors for properly addressing my concerns. I appreciate the additional metric for the unconditional setting and the effort they made to give some hints about the scenario with noisy depth. I agree that, given the limited rebuttal time, additional experiments using predicting depth are not possible, but it would be interesting to see them in the future. Following the authors response I will increase the score to 7.", " I appreciate the authors' effort to address my comments. Specifically, the new results on Waymo comparing with SAVi and Supervised SAVi++ strengthens the paper. The authors are encouraged to address remaining major limitations (e.g. 
relying on bounding box conditioning in the first frame, lacking results on real-world video datasets) in future work. ", " **“The proposed method performs significantly worse on MOVI-A,B (as shown in Table 1 of supplementary material). If the reason is indeed due to overfitting on simpler domains as claimed in suppl L26, there should be some experiment with same encoder.”**\n\nThank you for pointing this out. Indeed, our preliminary experiments indicated that this is the case, i.e. we observe overfitting on these datasets in the depth prediction setting, and using a simpler CNN encoder alleviates this. With MOVi-A/B being limited to simple geometric objects and our goal of scaling to real-world videos, we did not give this much further consideration. However, we agree that it could be useful to add this and will include this comparison.\n\n**“The compared baselines in Table 1. are quite weak. The authors should compare their model with other methods trained on depth maps, at least some naive extensions of other previous works to use 3d-maps (such as for ODIN [19]).”**\n\nThe baselines in Table 1 include SAVi and CRW, both of which are representative and strong baselines for the setting we consider. For example, CRW achieves strong performance on MOVI-E.\n\n\nIt would be interesting to adapt a method like ODIN to 1) produce more instance-centric segmentations (as opposed to segmentations that primarily focus on semantic aspects), and 2) to a (conditional) video setting. We believe that both of these points are out of scope for our work, but would be interesting for future papers to explore.\n\n**“The authors should show fine-tune performance on other segmentation tasks such as coco and pascal.”**\n\nAs we highlighted previously, the focus of this work is not on transfer learning or domain adaptation, hence, while we believe this type of analysis is an interesting future direction, it is out of scope for this work.\n\n**“How important is the high resolution in waymo dataset?”**\n\nWe do not find this to be very important: A resolution of 128x192 was sufficient to obtain good performance for SAVi++ on Waymo. In the interest of increasing the resolution further (e.g., as may be required for some future dataset) we have added one experiment using a higher resolution of 256 x 384 (i.e., SAVi HR). Our results show that the increased resolution improves performance slightly but is not essential.\n\n**“What is the main bottleneck in training time, given that the underlying datasets are not very large? Is it sequential time-steps for filling in the object slots?”**\n\nIn general we do not find that the current training time is prohibitive for any of the datasets we explore. Indeed, we can run on Waymo Open (one of the largest available labeled driving datasets with high quality LiDAR). More generally, the training time is mainly determined by the number of time-steps, and the number of object slots.\n\n**“it is noted that no PGT data is needed. Does using PGT data give improvements, and if so how much?”**\n\nThat is an interesting question for future work. As part of the rebuttal we have conducted an experiment with a supervised version of SAVi++ w/ actual GT (similar to TrackFormer), which resulted in 1.2 ± 0.0 % CoM and 68 ± 0.3 % BBox mIoU (for two seeds) compared to 4.4 ± 0.0 % and 50.5 ± 0.3 % for SAVi++. 
We expect these supervised numbers to be an upper bound for what amount of improvement is possible using PGT.\n\n**“Some more discussions on where the models are likely to fail such as low-data domain, as evidenced by L25 of suppl, could be useful.”**\n\nThank you for this suggestion. We will incorporate a discussion on this as part of the revised limitation section.", " Thank you for the review and comments on our paper. We understand that the reviewer is on the fence about the significance of our contribution and evaluation and we hope the detailed replies below and additional experiments summarized in the general response above can alleviate some of these concerns.\n\n**“L48 suggests extending previous work is more challenging but doesn't explain why.”**\n\nL47-48 states that “Initial efforts focused on single-frame, synthetic RGB images [13, 14, 35, 46], but extending this work to video and more complex scenes proved challenging.”, which is a slightly different point. We do not intend to claim that extending one work is more challenging than another, although the case for extending SAVi is straightforward: it is the SOTA end-to-end unsupervised learning approach to learning about objects in videos. Moreover, aside from the slot-based representations and competition among slots, it consists of a fairly standard architecture, which makes it a good starting point.\n\n\n**“L63 suggests it is the first model for segmentation without direct supervision or tracking supervision. This is strictly not true, given literature in domain adaption where one trains on synthetic segmentation dataset (such as Synthia or GTA) and domain adaptation transfer to Cityscapes (see [Ref1] for an example).”**\n\nL61-63 states that “SAVi++ is the first slot based, end-to-end trained model that successfully segments complex objects in naturalistic, real-world video sequences without using direct segmentation or tracking supervision.”. To the best of our knowledge this is an accurate statement. Notably, it excludes multi-stages approaches that do (supervised) segmentation pre-training, transfer learning, or domain adaptation, which is not the focus of this work. We would also like to emphasize that we are concerned with end-to-end unsupervised instance segmentation as opposed to supervised segmentation, which appears to be the focus of Ref1.\n\n**“Similar assertion is made in L3, but this ignores literature in video-text pretraining (see [Ref2] as example).”**\n\nL3 states that “Discovering this compositional structure in dynamic visual scenes has proven challenging for end-to-end computer vision approaches unless explicit instance-level supervision is provided.”, which is supported by numerous works. Note that we do not make any claims about the feasibility of (video-text) pretraining, transfer learning and domain adaptation in the unsupervised case and specifically focus on end-to-end approaches as considered here.\n\n**“The reason for using Resnet34 is unclear (L193). What metric is being used to decide this (\"more capable encoder\" is too vague).”**\n\nThank you for pointing this out. In our preliminary experiments we have considered a simpler CNN architecture (i.e. the default in the SAVi paper), the ResNet34 model (which was found to lead to better performance in the SAVi paper on their datasets) and a ResNet18 model (to strike a middle ground between these). On MOVi-C/D/E the ResNet34 performed best overall. 
\nWe will clarify how we arrived at this decision since we agree that “more capable encoder” is too vague.\n\n**“Why not use something like Vision-Transformer instead, and remove the additional transformer encoder? Design-wise that looks much simpler?”**\n\nThank you for pointing this out. Our intuition was to stay in line with recent state-of-the-art instance/panoptic segmentation models, a large majority of which use hybrid CNN+Transformer encoders, see Mask2Former (Cheng et al., CVPR 2022) and k-means Mask Transformer (Yu et al., ECCV 2022) for recent SOTA examples.\n\n**“MOVI is synthetic dataset, and Waymo dataset appears to be quite small with only 200 validation videos.More datasets should be tried such as Cityscapes, Cityscapes-VPS, VIPER (see [Ref3]).”**\n\nWe agree that the paper would be even stronger if SAVi++ could be shown to work on other real-world datasets, especially those having different characteristics such as VIPER. However, the diversity of Waymo Open videos should also not be underestimated as it concerns scenes recorded outside in the open that include lots of variation in scene backgrounds, clutter, and camera motion, unlike what is encountered in indoor scenes.\n\nMore importantly, prior object-decomposition approaches are mainly limited to synthetic datasets, which offer limited application domains. In that sense our results on Waymo Open demonstrate a true step toward end-to-end object-centric learning from real-world video, despite its size and object variability.\n\n\n", " **“The paper does not seem to have stated its limitations. The work has several major limitations, e.g. relying on depth information, far from being competitive with SOTA supervised models, and lacking experimental results on real-world videos.”**\n\nIn section 4.4 (“Limitations”, p. 9) we state a number of limitations, including those regarding bounding box conditioning in the first frame, temporal consistency, and the gap to existing supervised approaches.\nBased on your feedback and the comments of the other reviewers, we will substantially expand our discussion of limitations. See our general response above for a summary of additional points we will add to the limitation section.", " Thank you for the careful review and useful feedback. We provide a point-by-point response to the remaining comments and questions below.\n\n**“Despite the claims \"End-to-End Object-Centric Learning from Real-World Videos\", SAVi++ has very limited results on only one real world video dataset, Waymo. The results on Table 2 do not compare with any significant prior work. For more convincing experimental results, the authors are encouraged to consider indoor datasets as well, SUN RGB-D.”**\n\nThank you for pointing this out. Compared to real-world videos in the wild, Waymo Open is indeed relatively structured and certainly heavy on cars. Other datasets, such as SUN RGB-D offer greater complexity in that regard and it is foreseeable that further development of SAVi++ will be needed to truly support these. We will update the limitations section in the revised version of the paper to reflect this. Having said that, the diversity of Waymo Open videos should also not be underestimated as it concerns scenes recorded outside in the open that include lots of variation in scene backgrounds, clutter, and camera motion, unlike what is encountered in indoor scenes. We also note how prior object-decomposition approaches are mainly limited to synthetic datasets, which offer limited application domains. 
In that sense we argue that our results on Waymo Open are a true step *toward* end-to-end object-centric learning from real-world videos as the title indicates.\n\nRegarding the results in Table 2, we have added a quantitative comparison to two variants of the SAVi model (trained with RGB or Depth targets), which provides a better indication of the improvement of SAVi++. Our results below show how SAVi++ compares to these baselines.\n\n|Model |CoM(%) | B. mIoU(%)|\n|-----------------|-----------|-----------|\n|SAVi (RGB) |21.5 ± 1.8 |7.9 ± 0.9 |\n|SAVi (Depth) |17.5 ± 5.4 |21.7 ± 8.2 |\n|SAVi++ |4.4 ± 0.0 |50.5 ± 0.3 |\n\n\n**“The paper ignores a large body of work on monocular depth estimation, e.g. Monocular Depth Estimation Using Relative Depth Maps, CVPR, 2019.”**\n\nThank you for bringing this body of related work to our attention, we will update our discussion of related work to incorporate this. More generally, advances in monocular depth estimation architectures and losses only strengthen our work by providing alternative routes through which depth signals can be obtained. Here we opted for a simple depth estimation loss & architecture to stay closely to the SAVi model and preserve the ability to decompose video scenes into objects. For future work, we agree that it would be good to combine advances in depth estimation methods (such as relative depth map losses or architectural advances), which makes it valuable to discuss these related works.\n\n**“For Waymo dataset, the authors should compare with SOTA self-supervised instance segmentation methods, e.g. SAVi and GroupViT. The paper should also mention the performance of SOTA supervised methods to clarify on the gaps between these two type of methods.”**\n\nWe performed a comparison to a fully supervised tracking model similar to Trackformer (Meinhardt et al., 2022), which provides an indication of the gap to fully-supervised methods (supervised result reported for two seeds):\n\n\n|Model |CoM(%) | B. mIoU(%)|\n|------------------|-----------|-----------|\n|SAVi++ |4.4 ± 0.0 |50.5 ± 0.3 |\n|Supervised SAVi++ |1.2 ± 0.0 |68 ± 0.3 |\n\nAs mentioned in our previous comment (and the general response), we also carried out a comparison to SAVi to measure the improvement of SAVi++ on this dataset as well. We do not consider text-supervision or image datasets in our current benchmarks, which makes GroupViT not suitable for comparison.\n\n**“Can SAVi++'s depth prediction module benefit from the large body of monocular depth estimation paper?”**\n\nIndeed, as mentioned in our reply above, it is likely that SAVi++ can benefit from advances in monocular depth estimation. This is an orthogonal direction to our current contribution, which is why we leave this for future work. We will add a discussion to this point in the paper.\n\n**“It seems that the feature learning can use a strong baseline, e.g. Self-Supervised Representation Learning from Flow Equivariance, ICCV 2021.”**\n\nWe do not think that this baseline would contribute much, since our focus is on end-to-end learning without explicit segmentation supervision (as opposed to multi-stage transfer learning using a pretrained backbone and supervised fine-tuning).\n", " Thank you for the thorough review and useful feedback. 
Regarding the remaining questions about our work, please see our detailed point-by-point response below.\n\n**“Is there any reason why no metrics are reported for the unconditional settings?”**\n\nThe main reason for this was that our existing metrics require object slots to be matched to ground-truth boxes, which is more difficult to do in the unsupervised case. We have now conducted an evaluation of the unconditional setting using the Hungarian algorithm for matching using otherwise the same CoM metric as in the conditional case. Our results are as follows (lower is better):\n\n|Model |CoM(%) |\n|---------------|-----------|\n|SAVi++ |7.8 ± 0.8 |\n|SIMONe + depth |16.7 ± 3.4 |\n\n\n**“Compared to the baselines, the model requires additional supervision (the depth). The author did a great job in explaining how this signal is often available in the real world. However, experimenting with signals coming from predicted depth rather than ground-truth one would be interesting to explore.”**\n\nThank you for pointing this out. We agree that, while we have opened the door to application domains where depth signals are readily available, there are many real-world settings where depth may be more difficult to measure and one is limited to predicted depth instead.\n\nAn experiment with estimated depth is not feasible given the limited rebuttal time. However, we have experimented with “noisy” depth to gain some insights into the performance of SAVi++ when the depth target is suboptimal. We observe that SAVi++ yields good performance even when using additive Gaussian noise with a standard deviation of 40cm added to the depth targets.\n\n|Model |CoM(%) | B. mIoU(%)| |\n|--------------------|-----------|-----------|-------------------|\n|SAVi++ |4.4 ± 0.0 |50.5 ± 0.3 |3 seeds 500K steps |\n|SAVi++ (noise 10cm) |3.6 |49.89 |1 seed 300K steps |\n|SAVi++ (noise 40cm) |4.8 |48.35 |1 seed 300K steps |\n\n\n**“The results reported in Table 1 seem different from the ones reported in [16]. Is the setup in any way different from that one?”**\n\nIndeed, [16] reports results for the SAVi model using a simple CNN encoder. In our paper, we used a more capable baseline for comparison, which is the larger SAVi (ResNet) model variant from Kipf et al. (2022) that uses a ResNet encoder. We will clarify this in the paper.\n\n**“Since the slots are updated from one step to another, is the model able to “rediscover” an object if it leaves the scene and appears much later? Should we expect the model to attach that object to the original slot, or it will be attached to another random slot? Can this impact the final performance?”**\n\nThis is an interesting question, but difficult to measure quantitatively in our datasets as many objects that are leaving in Waymo Open usually do not re-appear. We have qualitative evidence that a re-appearing object (such as a car disappearing and reappearing during a turn) is detected later but with a different slot. We have also observed how slots initialized with bounding boxes corresponding to a disappearing object can take on newly appearing objects once the conditioned object has disappeared and it is “free”. \n\nAs to how this behavior will impact final performance, it is difficult to say. Clearly, when an object re-appears and it is modeled by a different slot, the quality of the decomposition will be suboptimal in the initial frames of appearance. 
More generally, there is a lot of headroom to improve the modeling of disappearing and reappearing objects, which is important to address in future work. Two possible approaches are to explicitly model object presence, e.g., as in SQAIR (Kosiorek et al., 2018), or by explicitly attending to past latent states, e.g., as in Slot-VPS (Zhou et al., 2022). We will comment on this in the paper.\n", " **“The paper would be much stronger if SAVi++ could be shown to work on one of these more diverse video datasets. Short of that, could the authors provide some evaluation of SAVI++'s performance broken down by category on the driving dataset?”**\n\nWe agree that the paper would be even stronger if SAVi++ could be shown to work on real-world videos “in the wild”. However, as you also pointed out, with current object-decomposition approaches being limited mainly to simple synthetic datasets, the step that our paper contributes towards real-world scenes is already an important contribution.\n\nCompared to real-world videos in the wild, Waymo Open is indeed relatively structured and certainly heavy on cars. DAVIS and Kinetics offer greater complexity in that regard and it is foreseeable that further development of SAVi++ will be needed to truly support these. We will update the limitations section in the paper to reflect this. Having said that, the diversity of Waymo Open videos should also not be underestimated as it concerns scenes recorded outside in the open that include lots of variation in scene backgrounds, clutter, and camera motion. In that sense, we argue that our results are a true step toward end-to-end object-centric learning from real-world videos as the title indicates.\n\nEvaluating SAVi++’s performance per category on the driving dataset is a great suggestion. We report mIoU and CoM metrics for the Pedestrian, Car and Cyclist categories below.\n\n| |Car |Person |Cyclist |\n|---------------|----------|------------|-------------|\n|Num. instances |15350 |2102 |275 |\n|CoM(%) |4.3 ± 0.0 |5.9 ± 0.2 |2.1 ± 0.1 |\n|B. mIoU (%) |53.4 ± 0.4|27.2 ± 0.5 |44.5 ± 2.4 |\n\nOur results indicate that cars indeed perform best but are closely followed by cyclists. The model performs worse on Pedestrians.\n\n**“The reliance on depth signals restricts the use cases of SAVi++ more than SAVi. [...] If the authors could show that sparse GT depth could be replaced with estimated depth (e.g. from stereo cameras, or a transfer from a pretrained monocular depth model) it would substantially relieve this restriction.”**\n\nWe tend to agree that the reliance on depth signals restricts the use case of SAVi++ more than SAVi. This has been the “cost” of scaling, and we will comment on this in the revised limitations section. At the same time, because of being able to scale, SAVi++ has opened the door to various application domains (such as in robotics) where LiDAR sensors for depth estimation are readily available.\n\nAn experiment with estimated depth is not feasible given the limited rebuttal time. Instead, we have conducted an experiment with “noisy” depth to provide some indication of the performance of SAVi++ when the depth target is not perfectly accurate (which is likely encountered when using estimated depth). We observe that SAVi++ yields good performance even when using additive Gaussian noise with a standard deviation of 40cm added to the depth targets.\n\n|Model |CoM(%) | B. 
mIoU(%)| |\n|--------------------|-----------|-----------|-------------------|\n|SAVi++ |4.4 ± 0.0 |50.5 ± 0.3 |3 seeds 500K steps |\n|SAVi++ (noise 10cm) |3.6 |49.89 |1 seed 300K steps |\n|SAVi++ (noise 40cm) |4.8 |48.35 |1 seed 300K steps |\n", " Thank you very much for the detailed review and useful feedback. Please find our point-by-point response below.\n\n**“The description of the literature on human grouping is a little misleading. It's not known whether perceptual grouping in humans is innate or not (L38).”**\n\nThank you for pointing this out. We agree that it is unclear whether perceptual grouping (mainly) develops through experience or is the result of some other kind of maturation process. Hence, we propose to modify the relevant part of line 38 as follows:\n\n\"...the ability to organize edges and surfaces into unitary, bounded, and persisting object representations develops through experience and/or maturation ...\"\n\nTo briefly clarify our understanding of the literature and what led us to write the original sentence in L38 in the first place:\n\n(1) The developmental literature indicates that while maturation may be necessary for certain perceptual abilities to kick in (e.g., binocular perception), it is not sufficient (Kellman & Arterberry, 2000, \"The cradle of knowledge: Development of perception in infancy\", MIT Press). \n\n(2) It is known that the natural environment provides cues that allow certain grouping principles to be learned (e.g., closure, Kim et al., 2021), and that adults rapidly learn new grouping cues (Zemel et al., 2002). \n\nAs you also pointed out, the literature on learned (self-supervised) approaches to object decomposition suggest that some kind of development through experience is certainly a possibility.\n\n**“[SAVi] substantially benefits from (and on challenging datasets, effectively requires) object \"hints\" on the first frame, and also estimates of optical flow. The latter are pretty easy to come by using pretrained models, but many large-scale video datasets do not have bounding boxes for all objects. Moreover, the authors' motivation for requiring these hints (L141-148) is pretty speculative -- I don't know of evidence that infants require people to point at objects in order to group them. Even if there were such evidence, it wouldn't answer the question of how an infant converts the pointing into something like spatial localization of objects. so the initial hints are a practical limitation, and it's likely that humans can get by without them.”**\n\nIndeed, SAVi benefits from object “hints” in the first frame, which the SAVi authors found was critical to scale from simple synthetic scenes containing simple geometric shapes to more complex synthetic scenes containing everyday real-world objects. In SAVi++, we have adopted this approach as well with the goal of scaling further to real-world visual scenes.\n\nWhile our main motivation for using object hints in the first frame is thus mainly a pragmatic one, we also felt that this setting shares some similarity to how human visual attention (and how humans parse a visual scene) can be directed via external signals, e.g., by pointing. We acknowledge that our current formulation doesn’t do a good job at communicating this point and may wrongly suggest that infants require people to point at objects to group them. We will update L141-148 accordingly.\n \nWe also acknowledge that the use of object hints in the first frame is a practical limitation, which is discussed in the limitations section. 
Maintaining the performance of SAVi++ without relying on such hints is an exciting direction for future work.\n\n**“There are some qualitative results of models trained without hints on the driving dataset, but I don't see quantification on any of them.”**\n\nIndeed, the main reason for this was that our existing metrics require object slots to be matched to ground-truth boxes, which is more difficult to do in the unsupervised case. To address this, we designed a version of the center of mass (CoM) distance metric that uses Hungarian matching. We obtained the following results (lower is better):\n\n|Model |CoM(%) |\n|---------------|-----------|\n|SAVi++ |7.8 ± 0.8 |\n|SIMONe + depth |16.7 ± 3.4 |", " **Other real-world datasets (Reviewers 3HCR, uKMC, GFoH)**\n\nReviewers have noted how our contribution would be even stronger if we provided results on other real-world datasets. The main reason for this was the concern that Waymo Open may not be entirely representative of videos “in the wild”, especially in terms of the diversity of objects. We agree with reviewers that still this is a special domain and other, in the wild, domains may require further work. To clarify this, we will add a discussion to this point to our limitation section.\n\nWhile we acknowledge the limitations of Waymo Open we also want to emphasize that the diversity of Waymo Open videos should not be underestimated as it concerns scenes recorded outside in the open that include lots of variation in scene backgrounds, clutter, and camera motion. In that sense, we argue that our results on Waymo represent a true step toward end-to-end object-centric learning from real-world videos as the title indicates. \n\n**Improved Limitation Section (Reviewers uKMC, GFoH)**\n\nIn response to reviewer feedback, we will expand our limitation section 4.4 to reflect our assumptions about target signals and object hints, model behavior when objects disappear, the gap to supervised methods, the limitations of Waymo Open in comparison to videos in the wild and potential failure modes such as encountered on MOVI-A/B.\n\nFor other points and questions raised by the reviewers, please see our response to individual points below each review.", " We would like to thank the reviewers for their thoughtful comments and positive feedback. We are pleased that the reviewers found that our contribution is significant and makes progress on a challenging problem (Reviewers 3HcR and FUXq), improves on SAVi (Reviewer uKMC), has clear motivation (Reviewer GFoH and 3HcR), has good visualizations (Reviewer GFoH), and has detailed and clear ablations (Reviewer FUXq).\n\nIn response to the feedback, we conducted new experiments and we will make changes to the paper to address the points raised by the reviewers. We summarize the main points and experiments below: \n\n**Quantitative metrics on the unconditional models (Reviewers 3HCR, FUXq)**\n\nTo quantitatively compare scene decomposition and tracking performance in the unconditional setting, we designed a version of the center of mass (CoM) distance metric that uses Hungarian matching. The matching algorithm finds the optimal assignment of ground-truth bounding box tracks to discovered segmentation mask tracks in order to compute the distance between their respective centers of mass. 
We obtain the following results (lower is better).\n\n|Model |CoM(%) |\n|---------------|-----------|\n|SAVi++ |7.8 ± 0.8 |\n|SIMONe + depth |16.7 ± 3.4 |\n\n**Training with estimated depth (Reviewers 3HCR, FUXq)**\n\nReviewers suggested investigating estimated depth as a target signal to increase the applicability of SAVi++ to other domains. While we were unable to consider a separate depth prediction model due to the short time frame of the rebuttal, we provide some insight into whether such a direction is likely to succeed. We conducted an experiment to evaluate if an accurate ground-truth depth signal is required for the success of SAVi++ on Waymo Open. We trained SAVi++ models with noisy depth targets instead of ground-truth depth (for which we used additive Gaussian noise with standard deviations of 10cm and 40cm). We found that the model was able to produce good segmentations and object tracks even at the highest considered noise scale of 40cm. See results below.\n\n|Model |CoM(%) | B. mIoU(%)| |\n|--------------------|-----------|-----------|-------------------|\n|SAVi++ |4.4 ± 0.0 |50.5 ± 0.3 |3 seeds 500K steps |\n|SAVi++ (noise 10cm) |3.6 |49.89 |1 seed 300K steps |\n|SAVi++ (noise 40cm) |4.8 |48.35 |1 seed 300K steps |\n\nThese results indicate that SAVi++ is capable of producing good segmentations and tracking performance despite the inaccurate target depth signal, opening the door for potential utilization of estimated rather than ground-truth depth signals.\n\n**Additional baselines (Reviewer uKMC)**\n\nWe evaluated two additional baselines: the original SAVi architecture (Kipf et al., ICLR 2022) and a fully-supervised SAVi++ model variant, which is similar to TrackFormer (Meinhardt et al., CVPR 2022). For the SAVi baseline (conditioned on bounding boxes in the first frame), we experimented with two settings: using RGB targets and using depth targets. We observed that SAVi++ performed significantly better than SAVi (see table below). \nFor the supervised baseline, we train SAVi++ without the decoder but instead directly supervise the bounding boxes to quantify the gap between SAVi++ and the supervised variant (see table below – *Supervised SAVi++* is reported with 2 seeds only [3rd seed TBD], all other results use 3 seeds).\n\n|Model |CoM(%) |B. mIoU(%) |\n|------------------|-----------|-----------|\n|SAVi (RGB) |21.5 ± 1.8 |7.9 ± 0.9 |\n|SAVi (Depth) |17.5 ± 5.4 |21.7 ± 8.2 |\n|SAVi++ |4.4 ± 0.0 |50.5 ± 0.3 |\n|------------------|-----------|-----------|\n|Supervised SAVi++ |1.2 ± 0.0 |68 ± 0.3 |\n\n\n**Evaluation on Waymo Open broken down by category (Reviewer 3HCR)**\n\nIn the following table, we break down the SAVi++ results from Table 2, per category, on the Waymo Open dataset. We find that cars dominate the videos but the model is also able to reliably track cyclists even though these are relatively rare. \n\n| |Car |Person |Cyclist |\n|---------------|----------|------------|-------------|\n|Num. instances |15350 |2102 |275 |\n|CoM(%) |4.3 ± 0.0 |5.9 ± 0.2 |2.1 ± 0.1 |\n|B. mIoU(%) |53.4 ± 0.4|27.2 ± 0.5 |44.5 ± 2.4 |", " SAVi++ is a successor the SAVi, model, which learns to segment objects in video sequences without segment annotations. Both SAVi and SAVi++ take \"hints\" about where objects might be on the first frame (such as bounding boxes, though they can be trained without such hints.) 
They both reconstruct optical flow via a bottleneck of object slots.\n\nSAVi showed several key limitations, in particular its poor performance on complex, realistic (or real) datasets and a general inability to segment static objects. SAVi++ makes progress on both of these fronts by using an additional cue / reconstruction target: depth, which may be supplied in addition to or instead of optical flow. Using depth as an additional cue, SAVi++ can learn to segment complex and realistic scenes better than its predecessor, especially static objects. Using sparse depth signals alone from a real-world autonomous driving dataset, SAVi++ can segment many of the objects despite never receiving ground truth segment annotations. Strengths:\n\n1. This paper makes progress on a challenging problem -- unsupervised segmentation of objects in realistic or real-world videos. This field has been limited mainly to simple synthetic datasets, so pushing toward real scenes is an important contribution.\n\n2. Both key differences between SAVi++ and SAVi are well-motivated: depth is a useful cue for segmentation, and it makes sense that more expressive architectures (and data augmentations) would be important for scaling to real data.\n\nWeaknesses:\n\n1. The description of the literature on human grouping is a little misleading. It's not known whether perceptual grouping in humans is innate or not (L38). Some ability to group moving objects is present as early as two months, and possibly earlier -- it's nearly impossible to measure. What is clear is that grouping ability **matures** from only segmenting moving objects to being able to segment static objects as well. Whether this maturation is a learning process that depends on experience is unknown too, but that much at least seems plausible via a kind of unsupervised learning (and has been recently modeled to some degree in https://openaccess.thecvf.com/content/CVPR2022/html/Bao_Discovering_Objects_That_Can_Move_CVPR_2022_paper.html and https://arxiv.org/abs/2110.06562 and https://arxiv.org/abs/2205.08515.)\n\n2. SAVI's main limitation is that it can't be trained (in its best form) on raw video datasets that contain nothing but RGB frames: it substantially benefits from (and on challenging datasets, effectively requires) object \"hints\" on the first frame, and also estimates of optical flow. The latter are pretty easy to come by using pretrained models, but many large-scale video datasets do not have bounding boxes for all objects. Moreover, the authors' motivation for requiring these hints (L141-148) is pretty speculative -- I don't know of evidence that infants require people to point at objects in order to group them. Even if there were such evidence, it wouldn't answer the question of how an infant converts the pointing into something like spatial localization of objects. so the initial hints are a practical limitation, and it's likely that humans can get by without them.\n\nSAVI++ seems to still rely on these hints. There are some qualitative results of models trained without hints on the driving dataset, but I don't see quantification on any of them. It's important to know if the hints are still critical for SAVI++'s success -- it is a valuable contribution either way, but doing away with the need for those hints is a challenging problem in its own right.\n\n3. The driving dataset looks like it's very heavy on cars. 
Being able to segment real scenes is a huge improvement over most earlier unsupervised models, but as far as video datasets go this one probably doesn't have the diversity of objects that others do (e.g. the DAVIS datasets or Kinetics.) The paper would be much stronger if SAVi++ could be shown to work on one of these more diverse video datasets. Short of that, could the authors provide some evaluation of SAVI++'s performance broken down by category on the driving dataset?\n\n4. The reliance on depth signals restricts the use cases of SAVi++ more than SAVi. With optical flow, there are supervised and unsupervised models (e.g. RAFT and SMURF) that transfer well to real-world video datasets, so reliance on flow isn't much of a limitation. But is that true for depth? If the authors could show that sparse GT depth could be replaced with estimated depth (e.g. from stereo cameras, or a transfer from a pretrained monocular depth model) it would substantially relieve this restriction.\n\n None Yes", " The paper proposes a model to improve on top of the SAVI model. The key differences include: utilising image depth as a signal for supervision and adopting scaling techniques during training to further boost the performance. The model obtains superior results on both a challenging synthetic dataset (MOVi dataset) and also on real-world driving videos (Waymo Open dataset).\n Pros:\n- With simple modifications, the model obtained clearly superior results compared to the SAVi model.\n- The ablation study is detailed and clearly highlights where the improvement comes from\n- Obtaining good performance on real-world video is a challenging, missing piece in video object-centric literature, and this paper brings good advancement in this direction.\n\n\nCons:\n- Even if the model produces strong results, the technical novelty is quite limited. Adding supervision from the depth signal and training techniques such as data augmentation were previously used on video data. However, I agree that clearly isolating the importance of adopting them by the object-centric community is a significant contribution.\n - Is there any reason why no metrics are reported for the unconditional settings? There are a couple of qualitative comparisons in the paper but no quantitative ones (for e.g. to compare against unconditional SAVi and SIMONe ). I expect the performance to degrade a lot, but I still consider it important to know if this is a scenario where it is worth applying the proposed model.\n\n- Compared to the baselines, the model requires additional supervision (the depth). The author did a great job in explaining how this signal is often available in the real world. However, experimenting with signals coming from predicted depth rather than ground-truth one would be interesting to explore.\n\n- The results reported in Table 1 seem different from the ones reported in [16]. Is the setup in any way different from that one?\n Since the slots are updated from one step to another, is the model able to “rediscover” an object if it leaves the scene and appears much later? Should we expect the model to attach that object to the original slot, or it will be attached to another random slot? Can this impact the final performance?", " Learning the compositional structure (e.g. scene graph) in dynamic visual scenes without labels (e.g. object masks) has proven challenging. Slot-based models leveraging motion cues have recently made progress in learning to represent, segment, and track objects without direct supervision. 
However, these methods only work on synthetic data and fail on complex real-world multi-object videos. This paper proposes to exploit depth signals to improve object-centric learning. It introduces SAVi++, an object-centric video model which is trained to predict depth signals from a slot-based video representation. The paper claims that by using sparse depth signals obtained from LiDAR, SAVi++ is able to learn emergent object segmentation and tracking from videos in the real-world Waymo Open dataset. Strengths\n\nSAVi++ improves on SAVi by predicting depth signals and utilizing stronger visual backbone architectures in combination with data augmentation.\n\nSAVi++ is evaluated on the Waymo Open dataset.\n\nWeaknesses\n\nDespite the claim \"End-to-End Object-Centric Learning from Real-World Videos\", SAVi++ has very limited results on only one real-world video dataset, Waymo. The results in Table 2 do not compare with any significant prior work. For more convincing experimental results, the authors are encouraged to consider indoor datasets as well, e.g., SUN RGB-D.\n\nThe paper ignores a large body of work on monocular depth estimation, e.g. Monocular Depth Estimation Using Relative Depth Maps, CVPR, 2019. \n\n For the Waymo dataset, the authors should compare with SOTA self-supervised instance segmentation methods, e.g. SAVi and GroupViT. The paper should also mention the performance of SOTA supervised methods to clarify the gap between these two types of methods. \n\nCan SAVi++'s depth prediction module benefit from the large body of monocular depth estimation literature? \n\nIt seems that the feature learning could use a strong baseline, e.g. Self-Supervised Representation Learning from Flow Equivariance, ICCV 2021. The paper does not seem to have stated its limitations. The work has several major limitations, e.g. relying on depth information, being far from competitive with SOTA supervised models, and lacking experimental results on real-world videos. ", " Brief Summary: The paper proposes an extension of a previous work, SAVi [30], which used optical flow as a training signal, to further include depth maps obtained from RGB-D cameras / LiDAR. The main goal is to learn from videos which have depth channels but no instance/semantic segmentation ground truths. \n\nExperiments on MOVi dataset types C, D, and E, which are created synthetically, as well as on the Waymo dataset obtained from real-world data, show that the proposed method SAVi++ outperforms the compared baselines. Pros:\n\n1. With ever-increasing 3D input data, it is imperative that models are able to utilize such information for semantic segmentation tasks. Further, 3D data enables learning about static objects. As such, there is a clear motivation for the problem at hand.\n\n2. The paper provides good visualizations such as Fig3, Fig5, as well as those in the supplementary material.\n\nCons:\n\n1. In my opinion, the paper makes overly general claims without providing citations to previous work, while the actual technical contributions and evidence are far more limited. For instance, L48 suggests extending previous work is more challenging but doesn't explain why. L63 suggests it is the first model for segmentation without direct supervision or tracking supervision. This is strictly not true, given the literature on domain adaptation, where one trains on a synthetic segmentation dataset (such as Synthia or GTA) and transfers to Cityscapes via domain adaptation (see [Ref1] for an example). 
A similar assertion is made in L3, but this ignores the literature on video-text pretraining (see [Ref2] as an example). \n\n2. On novelty: the technical novelty in the paper is slightly incremental, given that it mainly suggests using 3D depth maps instead of the flow maps used in previous work. \n\n3. On Model architecture: \n\n(i) The reason for using ResNet34 is unclear (L193). What metric is being used to decide this (\"more capable encoder\" is too vague)? \n\n(ii) Why not use something like a Vision Transformer instead, and remove the additional transformer encoder? Design-wise that looks much simpler.\n\n4. On Experiments:\n\n(i) MOVi is a synthetic dataset, and the Waymo dataset appears to be quite small, with only 200 validation videos. More datasets should be tried, such as Cityscapes, Cityscapes-VPS, and VIPER (see [Ref3]).\n\n(ii) The proposed method performs significantly worse on MOVi-A/B (as shown in Table 1 of the supplementary material). If the reason is indeed overfitting on simpler domains, as claimed in suppl. L26, there should be some experiment with the same encoder.\n\n(iii) The compared baselines in Table 1 are quite weak. The authors should compare their model with other methods trained on depth maps, at least some naive extensions of other previous works to use 3D maps (such as for ODIN [19]).\n\n(iv) The authors should show fine-tuning performance on other segmentation tasks such as COCO and PASCAL. \n\n(v) Some design decisions on the choice of visual encoders should be empirically justified. \n\n[Ref1]: Hoyer, Lukas, Dengxin Dai, and Luc Van Gool. \"HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation.\" arXiv preprint arXiv:2204.13132 (2022). \n\n[Ref2]: Miech, A., Alayrac, J. B., Smaira, L., Laptev, I., Sivic, J., & Zisserman, A. (2020). End-to-end learning of visual representations from uncurated instructional videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9879-9889).\n\n[Ref3]: Kim, Dahun, Sanghyun Woo, Joon-Young Lee, and In So Kweon. \"Video panoptic segmentation.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9859-9868. 2020. Q1. How important is the high resolution in the Waymo dataset? \n\nQ2. What is the main bottleneck in training time, given that the underlying datasets are not very large? Is it the sequential time-steps for filling in the object slots?\n\nQ3. In L100, it is noted that no PGT data is needed. Does using PGT data give improvements, and if so how much? Yes. Some more discussion on where the models are likely to fail, such as low-data domains, as evidenced by L25 of the supplementary, could be useful. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 3 ]
[ "aqJYLX2V97n", "chFz6KmAN3w", "C2YjG_xK6w", "U9sJnnbyeYK", "aqJYLX2V97n", "Sbjqw-H1l6", "U9sJnnbyeYK", "OoSNH64uIB6", "y8LWgMxvfra", "cSZpU80SlJ6", "dBZjrMSEQYy", "chFz6KmAN3w", "nips_2022_fT9W53lLxNS", "nips_2022_fT9W53lLxNS", "nips_2022_fT9W53lLxNS", "nips_2022_fT9W53lLxNS", "nips_2022_fT9W53lLxNS" ]
nips_2022_mXP-qQcYCBN
AutoLink: Self-supervised Learning of Human Skeletons and Object Outlines by Linking Keypoints
Structured representations such as keypoints are widely used in pose transfer, conditional image generation, animation, and 3D reconstruction. However, their supervised learning requires expensive annotation for each target domain. We propose a self-supervised method that learns to disentangle object structure from the appearance with a graph of 2D keypoints linked by straight edges. Both the keypoint location and their pairwise edge weights are learned, given only a collection of images depicting the same object class. The resulting graph is interpretable, for example, AutoLink recovers the human skeleton topology when applied to images showing people. Our key ingredients are i) an encoder that predicts keypoint locations in an input image, ii) a shared graph as a latent variable that links the same pairs of keypoints in every image, iii) an intermediate edge map that combines the latent graph edge weights and keypoint locations in a soft, differentiable manner, and iv) an inpainting objective on randomly masked images. Although simpler, AutoLink outperforms existing self-supervised methods on the established keypoint and pose estimation benchmarks and paves the way for structure-conditioned generative models on more diverse datasets. Project website: https://xingzhehe.github.io/autolink/.
Accept
Building from works on unsupervised keypoint discovery for a domain of 2D images, this work proposes to jointly learn a skeletal structure that links discovered keypoints, and further proposes a novel image masking strategy for extracting limited background information, to force the keypoints to capture maximum information about the scene. The evaluations span a variety of datasets, with quantitative numbers on human face and body pose datasets, and show improvements from the proposed approach. A novel idea, executed well, and of interest to many at NeurIPS. Congratulations to the authors, and please fix visualization issues etc. before camera-ready / next revision.
train
[ "yLkmvohHq7", "oJGXx7cSFS", "i0mCanM_h_E", "5jrF8T0H7bj", "6GDhOao1pUh", "Ycb66c_5eaD", "oLOAK7gtEp", "0suLRjc60fT", "uGTUsVnK5Q-", "6EGmokjWiS8", "ahBXuUUp-i2", "rTpSZohx5mN" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am happy with the response and have no further concerns about this paper.\nI will keep my original rating.", " ### For Pascal VOC evaluation one could use the Pascal-part segmentation dataset?\nThank you for the valuable pointer. We had not worked with Pascal VOC before and concluded from the foreground-background evaluation in [33] that there are no part masks available. We will consider those additional part segmentation masks in the future.\n\n### Written confirmation from authors [33] on reported numbers in the paper are with unsupervised saliency, but the released code uses GT masks.\nWe corrected that misclassification (Table 3, marked in red) and now report both the original results from the paper (where available) and the publicly available version reported by [15] (to be able to report on the full CUB-all dataset). Thank you for inquiring on this aspect, we did not intend to misclassify their work. Please note that we already classified it as self-supervised/unsupervised in the related work. Moreover, the other two distinguishing factors still hold: 1) We perform significantly better than [33] on Tai-Chi and CelebA-wild. 2) Ours scales better to a large number of keypoints. \n", " Thanks authors for the clarifications and adding CUB results. The paper updates strengthen the paper overall. \n- For Pascal VOC evaluation, I think, one can use Pascal-part segmentation dataset?\n- For new CUB table, it is written that [33] uses GT silhouette supervision. I just cross-checked the work of SCOPS [33] which mentions that 'unsupervised saliency' is used (not GT silhouettes). There seem to be discrepancy with their code base. I wrote to the authors of [33] to get the clarity of their setting. They mention that the reported numbers in the paper are with unsupervised saliency, but the released code uses GT masks. Please update the table accordingly. As such the proposed method results on CUB are not that strong.", " I appreciate the authors for their response which addresses most of my concerns - I've updated the rating accordingly. ", " ### Using the masked image for appearance vs. extracting an appearance feature?\nIn the previous methods extracting appearance (e.g., [38]), these appearance features are not naturally disentangled from the pose (a feature encoding could store pose information). We believe this is the reason why they require manual geometric deformation or different frames in a video to disentangle. In our case, masking ~80% of pixels images removes the pose information very reliably. This reconstruction objective can be seen as a form of inpainting. As discussed in the related work section [6, 40, 51, 68, 101] indeed show that masking large regions removes structure such as pose. In addition, masking the image is a much simpler operation and therefore has a runtime advantage.", " ### How similar do the objects need to be?\nObjects only need to have the same topology. For example, humans have different appearances, different clothes, and different poses, but they have the same structure of a head, two arms, and two legs linked to the torso. To evaluate the limits, we already tested on AFHQ with animal faces of different species (incl. various dog breeds, domestic cats, lions, wolves, and foxes) trained together which differ significantly in texture, but they all have two eyes, two ears, a nose, and a mouth. Our model is able to learn this shared face structure for all animal faces. 
Furthermore, we added the experiments on CUB which includes various bird species (see Reviewer zetA).\n\n### Keypoints focus on the boundary, intentional? \nOur initial goal was to learn the sparse skeleton structure for humans, for which this approach indeed improves results significantly (see Figure 3 and 6, by using 10 and 16 keypoints and quantitative eval.). Only when using a large number of keypoints, they focus on the boundary and, as the model is quick to train, the desired number of keypoints can be selected easily for the task at hand. It was unintended yet we were happy with how well AutoLink scales to a large set of keypoints, still settling to semantically meaningful and consistent locations on the boundary of parts (e.g., [33] does not scale, see discussion above with reviewer zetA 2.). As our evaluation shows, sparse joint locations can also be recovered as a linear combination of the detailed results when desired, validating the semantic consistency. \n\n### Not sure if keypoints are meaningful for Flowers?\nFlowers are a boundary case. Their shape is highly symmetric, leading to unavoidable ambiguities, similar to the left-right ambiguity discussed earlier. Moreover, different flowers differ in the number of leaves, violating our assumption on a shared topology. Therefore, positioning keypoints consistently for the same flower type on the foreground can be seen as a success.\n\n### Discussion on unsupervised foreground segmentation methods?\nWe added the discussion of the unsupervised foreground segmentation, including [A], to the related work section. It is indeed an interesting direction to extend our model to segmentation. However, more work would be needed to compete with existing segmentation methods, including finding the pixel aligned object boundaries from our relatively rough linear segments and to scale to high number of edges (currently the edge matrix has quadratic complexity and our model is limited to 1024 edges formed between 32 keypoints on a single V100 GPU. \n\n### Aren't [39] and [B] simpler than AutoLink?\nWe argue that our method is simpler in terms of network architectures, training, and application to new domains:\n[39] does not only need videos to train but also weak supervision from unpaired ground truth skeleton examples. It also requires pre-training on the skeleton-to-keypoint generation network in addition to the image generator. Besides, it uses a discriminator with CycleGAN losses, which itself can be unstable. Ours has no GAN, a single loss, trained end-to-end, and uses a single encoder/decoder pair.\n[B] does not need edges but they 1) model each keypoint as an anisotropic Gaussian distribution, with the covariance implicitly modeling part orientation and length, 2) carefully chosen different sampling rate for different videos (see their supplemental), 3) use three different losses, where two of them are later added after the keypoints are stable during training, 4) use two parallel branches to process video frames.\nBy contrast, our method only has a single branch (note that the lower branch in our overview figure represents only the masking operation that is without learnable parameters and implemented in two lines of code) edges between keypoints are as easy to draw as anisotropic Gaussians, and training is on a single perceptual loss without the need to schedule the influence of multiple losses across training iterations.\nWe added [B] to our table. 
Thank you for the pointer.\n\n### What is a “structured background”\nWith \"structure\" we follow a similar definition as [115], describing it as \"stable local semantics across different object instances\". For our object representation it means that it can be represented with the same edge weights, i.e, the topology is shared across images. Hence, a background that has common features across images that are distinctive would be considered structured and modeled as part of the edge map. Note that our spectral clustering extension still enables separation of foreground and background even when both of these properties are fulfilled. For instance, the H3.6M dataset is captured in the same room (common features) with high contrast between floor and walls (distinctive). ", " ### Does the model only work on 2D objects?\nHuman3.6m and Taichi have highly articulated and large 3D view variations with respect to the 3D human object. Our model outperforms all existing unsupervised single-frame 2D pose estimation methods on these datasets. As we explained with examples in our limitation section, the left-right and front-back ambiguity is also a problem for all previous unsupervised 2D keypoint detectors, even when trained on videos [87], templates [85], or unpaired ground truth [39]. We argue that our method applies to 3D objects but learns to model only the visible side and in this regard has the same limitations as existing methods. Deriving a self-supervised 3D detector that models occlusions could be attempted as a post-process (e.g., using recently developed unsupervised 2D-3D lifting approaches [A]) or by extending this model to 3D. Neither direction is straightforward, e.g., [A] works merely on GT 2D poses and struggles when applied to estimated 2D poses, even when the 2D pose estimator was trained in a fully supervised way. \n\n[A] Wandt, B., Little, J.J. and Rhodin, H., 2022. ElePose: Unsupervised 3D Human Pose Estimation by Predicting Camera Elevation and Learning Normalizing Flows on 2D Poses. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6635-6645).\n\n\n### Additional comparison, e.g., to [33] on Pascal-VOC. \nA direct comparison on Pascal-VOC is not possible as their evaluation is only on foreground segmentation, not part localization. The added comparison on CUB (Table 3 revised paper, and discussion below) shows that the co-segmentation in [33] solves some of the left-right ambiguity resulting in higher scores on the seabirds where left-right matters. However, our method scores higher on the full CUB dataset, H3.6M, and CelebA (see Table 1, 2, 3). Furthermore, note that the sweet spots differ for both methods. On CUB they chose to use manually annotated silhouette maps to weakly supervise and their performance does not scale well to more than eight parts (as they mentioned in their supplemental) while ours remains fully self-supervised and scales to a large number of keypoints (we tried up to 32) yielding finer details. \n\n### Difficult to assess the model with quantitative results limited to face and human?\nFaces and humans are the most commonly used domains for quantitatively testing in unsupervised 2D keypoint detection methods [30, 31, 38, 39, 59, 93, 97, 106, 85, 115]. We tested humans in different conditions (e.g., outdoor tai-chi and indoor humans on H3.6M, with and without segmented background), and included animal portraits additionally. 
Only a few methods [59,31,115] evaluate on CUB birds and merely on a curated version where images are mirrored so the birds point in the same direction. We added an experiment on CUB birds (new Table 3) with two protocols, one aligned and one unaligned. In the simple case when direction is aligned, all methods perform well and achieve comparable results. In the unaligned case our model outperforms all the unsupervised methods and is comparable with those using GT foreground masks [33]. \n\n### Can saliency maps help?\nWe believe it is a valid idea that could possibly improve the results (see discussion of [33] above) but it is opposite to our current focus on unsupervised learning without any annotations (e.g., [33] uses GT segmentation masks as saliency maps on CUB, see their official Github [https://github.com/NVlabs/SCOPS]). We would like to explore it in our future work.\n\n### Can the model fit a pre-defined skeleton template?\nFurther constraints would be required, such as [85] using temporal data and various losses, e.g., anchor-point loss and boundary loss, and [39] requiring hundreds of example poses in a more complex cycleGAN framework. Please note that our focus is opposed to this, on learning the structure and succeeding without manually created models. Nevertheless, it is an interesting avenue for future work to integrate such a template without additional restrictions. We briefly tried to fix the connectivity in our model, but convergence to the expected body part assignments was unreliable and depended on the initialization. We believe a soft constraint towards a graph that is isomorphic to the desired template graph could succeed, but quantifying isomorphism is difficult in itself.\n\n### Corrections and suggestions\nWe added the intuition in Section 3.1 of how the edges are drawn (Gaussians extended along the edge). We enlarged the font size in Figure 4 and 5 for better readability. We corrected the missing opening bracket in Eq1, the missing 'equal' sign in Eq3, and the plural typo in line 264. Thank you for the detailed read.\n", " ### Is the concentration constraint [115] used for faster convergence?\n\nThe perceptual reconstruction loss is the only one we use. One of our advantages is that we do not need any such regularizer. Still, ours is very fast to converge (3h using a single V100) which we believe is as fast or faster than related methods, e.g. [87] takes 2 days using 2 TitanX. \n\n### More detailed training process?\n\nWe believe the provided training details in Section 3.3 are complete (incl. loss, optimizer, batch size, image resolution, training time, number of iterations, and learning rate). In addition, we provided the network architecture in the supplemental Figure 1. As for the two applications, pose transfer and conditional GAN, we put the details in the supplemental Section C. We will release our code including all of them upon acceptance. If there is any missing aspect, we would be happy to discuss it further and add more details.\n\n### Some figures in the paper are blurry when zoomed in?\n\nSince we provided hundreds of examples for each dataset, images are compressed with jpg. Is the overview figure lacking the detail? We double-checked and found the others at a reasonable trade-off between document size and quality. 
\n\n### Are horses and zebras trained jointly so that they share similar structures even if they have significantly different textures?\n\nEven though joint training on different animals is possible (see animal heads experiment), we trained zebras and horses separately in this experiment. The two models resulting in nearly the same positioning shows how robust the method is. Still, we will mention that this is only one example (an extreme case, we were surprised about how well it worked). We clarified this in the revised version.", " We thank all reviewers for their valuable time and detailed reviews. We address the open questions below, in response to each reviewer, including evaluating on an additional dataset as requested by zetA and aWFu. We also updated the paper with additional discussion on related work and all suggested improvements, including enlarging the text in Figure 4 and 5.\n\nDue to the 9-page limitation, we temporarily moved the ablation study details to the appendix which can be brought back with the additional page available for final versions.\n", " This paper proposed a simple yet effective framework for self-supervised learning of human skeletons and object outlines. An encoder is firstly used to regress the key points in a heatmap manner. Then a shared edge weight graph is generated by the keypoint predictions. An edge map is drawn based on the keypoint locations and edge weights. To get rid of the structure information in the image, the authors also randomly masked the original image as an appearance material to recover the input from edge maps. The model is then trained by the perceptual reconstruction loss. Experiments on human skeletons and other object outline datasets show promising results (robust key points and edge predictions). Strengths:\n1) This paper is well-written. The idea of this paper is novel and interesting.\n2) This paper successfully built a simple yet effective self-supervised learning framework. Different from previous self-supervised landmark prediction networks which normally enforce the key points to follow the same transformation as images, this paper constructed a shared graph and individual edge weights to reconstruct the image. The differentiable weights were inspired by the previous work on drawing and sketching, and it's very interesting to introduce them into this area for skeleton predictions.\n3) To avoid the graph bottleneck from modelling appearance, this paper additionally feeds the encoder with a marked input to enrich the required appearance for reconstructing images.\n4) The experimental results of this paper are impressive. By using linear regression to regress predicted facial key points to ground truths, the model achieved a very promising NRMSE result. On other additional datasets, the model is also able to predict very satisfying skeletons and key points, which cater to human perceptions. In the provided videos, the predictions are also stable in continuous frames. The predicted edge weight, which is shared in a specific task, can also well model the dependency of different key points.\n5) Interesting ablation study on the number of key points and edge thickness was given.\n\nWeaknesses:\n\n1) The training process of this paper should be described in more detail in the main paper. For example, do you use a concentration constraint as in [115] to help learn heatmaps? 
Otherwise, the keypoint detector seems to be slow to converge.\n2) Some figures in the paper are blurry when zoomed in (especially for the skeletons), which should be improved.\n3) The authors argued that texture doesn't matter as predictions for zebras are not affected by the stripes. However, this can be because the zebra is jointly trained with horses. What if the model is only trained with objects with significant textures? As in the Weaknesses. Yes. The method has limitations when the background has structures or the object is occluded.", " This work proposes a new self-supervised landmark estimation technique from image collections without any GT annotations. In contrast to existing works that discover spatially independent landmarks, this work proposes to reason about connections/edges between the landmarks (skeletons). The core of the technique is an auto-encoder with the estimated edge maps and masked images as bottleneck layers, forcing the network to disentangle structure information into the learned keypoints and edges. Experiments on humans and face datasets demonstrate better results compared to existing landmark and part discovery works. Visual results demonstrate generalization of the technique to other object types (although the quality of results is not clear from the visuals on these datasets). Strengths:\n- A simple yet effective technique for landmark discovery.\n- The use of a simple masking strategy to bottleneck the structure information in an image instead of learning a separate appearance encoder.\n- The use of differentiable edge estimation to represent structure with keypoint edges instead of just keypoints.\n- State-of-the-art results on face and human datasets.\n\nWeaknesses:\n- Most of the results are shown on 2D-like object images with relatively constrained viewpoint variations. There are no quantitative results on more 3D-like object datasets such as Pascal-VOC, where there are 3D objects such as cars and animals from different viewpoints. For instance, existing works (e.g. [33]) report results on the Pascal-VOC parts dataset. Does the proposed approach fail on such datasets with more 3D viewpoint variations? The paper claims the technique works well on all types of objects, but metrics are limited to only certain types of 2D objects.\n- Quantitative metrics are only shown on humans and face datasets. It is difficult to assess the quality of the discovered skeletons on other object datasets (such as zebras, hands) just from visuals. Also, the visuals in Figure-1 only show horses and zebras from very similar viewpoints.\n Experiments to further strengthen the paper:\n- Did the authors try using unsupervised saliency to make the network focus more on foreground regions? Does this improve the results?\n- If we assume a generic object skeleton is given (i.e., edge weights are given), can the proposed approach learn to attach the given skeleton to the right locations in the images?\n\nSuggestions:\n- It is better to give an intuitive explanation of the edge computation (eq.3) so that readers do not need to look into [66] to understand this part of the paper.\n- Font sizes are too small in Figures 4 and 5. Increase them for better readability.\n\nMinor corrections:\n- Eq.1: Missing opening bracket in the denominator.\n- Eq.3: Missing 'equal' sign.\n- line 264: 'a masked images' -> 'masked images'. Several limitations are discussed in the paper. 
It is better to also add a discussion of how much camera viewpoint variation the proposed approach can handle across the image collection.", " This paper proposes to learn 2D keypoints, where the keypoints are linked by straight edges, from images of the same object class. The method is trained by reconstructing the original input image using masked input images and the learned edge map. The edge weights are learned and are shared across all instances in the dataset. Quantitative evaluations focus on datasets with human poses such as Human3.6M and CelebA, and qualitative results are on datasets such as Zebra, Horses, and Flowers. Results show that the proposed edge modeling is important for learning keypoints across datasets. Strengths\n- To the best of my knowledge, the proposed method is new for learning keypoints in a self-supervised way with learned edge weights across keypoints.\n\n- The evaluation is on a range of datasets, with fairly extensive ablation studies of the method with a range of hyperparameters. \n\n- The presentation of the paper is clear, with lots of visualizations of the method, including discussions of failure cases and limitations.\n\nWeaknesses\n- The paper mentions that the training requires a collection of images of the same object class (+ the graph edge weights are shared). Could the authors further discuss this requirement (perhaps in limitations in the main paper, if the class requirement is strict)? How similar do the objects need to be - would this method work on image datasets like the CUB dataset? \n\n- The keypoints visually seem to cover object edges a lot of the time - I'm not sure if the keypoints are covering meaningful locations of the image (ex: in the provided supplementary videos & especially for flowers in Figure 6 of the appendix). Maybe this is intentional by the authors - could this be clarified? Could the authors also discuss the relation of this work to works on unsupervised foreground segmentation, since it looks like the edges are covering a lot of the foreground (ex: [A])?\n\n- The authors mention that their approach is simpler than alternatives (lines 13, 265) - however, there are simpler approaches for keypoint discovery that perform comparably on Human3.6M (ex: [39], [B] - both have fewer components than this model and don't need edges - they both need video though). Do the authors mean that their method is simpler for image-based keypoint discovery? Also, I think that Table 2 could be updated with [B], which is a self-supervised method that achieves 2.53 $\\pm$ 0.056 on Human3.6M. \n\n[A] Savarese et al., Information-Theoretic Segmentation by Inpainting Error Maximization, CVPR 2021\n\n[B] Sun et al., Self-Supervised Keypoint Discovery in Behavioral Videos, CVPR 2022 I listed some questions above in weaknesses.\n\nSome more questions:\n- Could the authors further clarify \"structured background\"? Is it when the dataset has all the same background, or is it when the background has straight / patterned edges?\n\n- What is the effect of using the masked image for appearance vs. extracting an appearance feature from the image as in [38]? Yes, the authors included limitations and potential negative societal impacts." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "0suLRjc60fT", "i0mCanM_h_E", "oLOAK7gtEp", "6GDhOao1pUh", "Ycb66c_5eaD", "rTpSZohx5mN", "ahBXuUUp-i2", "6EGmokjWiS8", "nips_2022_mXP-qQcYCBN", "nips_2022_mXP-qQcYCBN", "nips_2022_mXP-qQcYCBN", "nips_2022_mXP-qQcYCBN" ]
nips_2022_f3zNgKga_ep
Video Diffusion Models
Generating temporally coherent high fidelity video is an important milestone in generative modeling research. We make progress towards this milestone by proposing a diffusion model for video generation that shows very promising initial results. Our model is a natural extension of the standard image diffusion architecture, and it enables jointly training from image and video data, which we find to reduce the variance of minibatch gradients and speed up optimization. To generate long and higher resolution videos we introduce a new conditional sampling technique for spatial and temporal video extension that performs better than previously proposed methods. We present the first results on a large text-conditioned video generation task, as well as state-of-the-art results on established benchmarks for video prediction and unconditional video generation. Supplementary material is available at https://video-diffusion.github.io/.
Accept
This paper proposes a diffusion model for video capable of generating long and high-resolution videos. Diffusion models have generated some more excitement around generative models as well, so the paper is well-timed. The reviewers had a few concerns regarding additional experiments and clarifications, and it appears that the authors have satisfied those concerns. Overall, the reviews are positive, and there was a decent amount of interaction between the reviewers and authors, though much of the discussion was straightforward and didn't seem to require a good deal of discussion. I therefore recommend accept for the paper based on there being a clear consensus and the discussions and revision satisfying most outstanding concerns. Overall I have no recommendations of the reviewers based on this paper. The paper might have been very straightforward to read, and given the consensus I think that it is the case that this was a relatively easy review for everyone (neural reviewer score for everyone, though maybe due to paper being easy).
train
[ "nQnD9Vc8QV", "_7BAkJHfZE7", "7htANh6dWuQ", "dSibyDLkyKw", "czre6QHJmT", "lcVnNZPoZqA", "kTRdJcegaEt", "s8_mHAawBes", "hAUAZ1g2T7e" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response! \n1. I believe co-train/co-finetune is well studied in image and video recognition domain so directly extending it into video generation sounds a bit weak to me. Though, the guidance method sounds interesting and thanks the author[s] for the contribution.\n2. Yes, a more comprehensive background recap will be much more appreciated, how about adding it into Appendix?\n3. Thanks for the answer.\n\nI have updated my scores accordingly, Thanks.\n\n\n\n\n ", " Thank you for your review. We address your questions and concerns below:\n\n1. **Low resolution**\n\n Since video generation is computationally expensive, and since most well established benchmarks in the video generation literature only consider low resolution, we chose to focus on this setting for this paper. However, we will add a higher resolution (128x128) experiment for the UCF101 benchmark where there are some other works to compare against.\n\n2. **Comparing against recent works StyleGAN-V [CVPR22], DIGAN [ICLR22]**\n\n Thanks for making us aware of these. We will add them to the discussion and include their results on our considered benchmarks (only UCF101 it seems).\n\n3. **Why 3D-Unet**\n\n We experimented with models that used 3D convolution, or separable convolution across space and time. However, we found these models to not be an improvement over the separable space-time attention + spatial convolution that we ended up using. We also tried feedforward models outside the UNet family, such as ResNets and ViTs, but found their performance to be much worse than our UNet. There might well exist other architectures that do well on this task, but we have not identified them as of yet.\n\n4. **About diverse datasets**\n\n Of the publicly available datasets it seems that Kinetics is the most diverse, containing videos of people performing 600 different actions, which is one of the reasons we experimented on this dataset in particular. The private dataset we consider for text-conditional video generation is even more diverse, also including many scenes without people in them. Unfortunately the latter cannot be made public. Do you know any other diverse public datasets that we could consider?\n", " \nThank you for your review and your appreciation of our work. We briefly address the weaknesses you identified below:\n\n1. **More experiments needed for supporting the joint training using images and video**\n\n a. **It would be better to sample images outside of the video dataset**\n\n Yes, adding a more diverse set of images to our training data would help further improve our results. We hope to show this in future work, as it’s not part of the benchmarks / datasets we consider here. Our goal in this paper is to study joint training on images for a single video benchmark dataset without introducing confounding factors due to images from extra datasets.\n\n b. **Does joint training work for unconditional video generation?**\n\n Yes, we find that joint training also helps unconditional video modeling reach better results, with the advantage being greatest when training on smaller minibatches / larger datasets. We could add an additional ablation for this, e.g. by running UCF101 with and without joint training. Would that address your concern or are you looking for something different?\n\n2. 
**Video frame interpolation literature would be better discussed in the section on video extension**\n\n We will review our discussion of this literature and move the content to the video extension section where appropriate.\n", " Thank you for your review. We briefly address your questions below:\n\n1. **Novelty**\n\n Joint-training between images and video is not something we have seen before for the class of diffusion models. We will gladly add a reference if provided. We would also like to highlight our proposed guidance method for conditional sampling from unconditional models, which produces great results and seems quite different from earlier methods.\n\n2. **Writing**\n\n We have made some edits for the next version, and we can add more background material to make the paper more accessible to a wider audience. Is there any part of the writing specifically that we should have another look at before submitting the camera ready version?\n\n3. **Training time for reconstruction guidance**\n\n Our guidance technique for conditional sampling of unconditionally trained models is only applied at sampling time: Training is unconditional, so no additional training time is required.\n", " Thank you for your review. We briefly address your 4 identified weaknesses below:\n\n1. **128x128 vs 64x64 resolution**\n\n Working with video data is more computationally demanding than working with images. For this reason most of the popular benchmarks only consider low resolution. For UCF101 there also exist some works considering 128x128, as we discuss in the paper. For this benchmark, evaluation metrics are comparable between 64x64 and 128x128, since the C3D network used for evaluation does an internal resizing. (Simply resizing our 64x64 samples to 128x128 before evaluation will therefore not affect the internal representation of our samples). We’re therefore confident the method will work at that resolution also. If it addresses your concern, we can add an experiment at the 128x128 resolution for this benchmark.\n\n2. **Figure 1 dimensions**\n\n Figure 1 shows how our 4-dimensional video tensors are downsampled and upsampled in the spatial dimensions. Since the frame dimension is not downsampled, we do not denote the number of frames in this figure. We can make this explicit in the caption of the figure. \n\n3. **Necessity of space-time factorization**\n\n Space-time factorization is indeed essential in keeping memory requirements for parameters reasonable. In addition, a space-time factorized architecture more easily admits joint training on videos and images, which we found essential for good sample quality. This is because the factorized architecture lets us easily drop or bypass the layers that perform temporal mixing when training on independent images, and these layers have relatively few parameters. Performing an analogous dropping of parameters for 3D convolutions to bypass temporal mixing would waste many more parameters.\n\n4. **Performance without temporal attention**\n\n Without temporal attention all frames of the video would be generated independently. This is effectively how we model images in our joint training of video and image generation. The generated frames would still look good in this case, and the metrics that look at single frames (“FID-first” and “IS-first” in table 4) would not be affected. However, all metrics that look at the relationship between frames (all other reported metrics) would be much worse. 
As a baseline we tested outputting the same frame for the entire video for the video prediction benchmarks (BAIR and Kinetics) and found that the results are indeed bad. Outputting independent frames is presumably even worse on these metrics. \n", " The paper extends the image-based diffusion model methods into the video domain. It builds upon the 3D U-Net architecture with attentions along the the decoupled space and time dimensions. The proposed video diffusion model also enables text-conditioned video generation. They outperform state-of-the-art methods in different benchmarks for video prediction and unconditional video generation. They also show the joint image-video training and classifier-free guidance for video diffusion models. The proposed diffusion model can also perform on the long sequence via auto-regressive inference.\n Strengths\n1. The proposed 3D U-Net diffusion model achieves state-of-the-art performances on unconditional video modeling on UCF101, video prediction on BAIR Robot Pushing and Kinetics-600, and text-based video generation tasks.\n2. The proposed joint image and video training is effective.\n3. The paper starts with the background section which gives a good guidance to readers.\n----\n\nWeaknesses\n1. How does the proposed method work on 128x128 resolution results, compared to those of 64x64?\n2. In figure 1, where is the num-frames dimension denoted?\n3. Is the space-time factorization necessary? What would be the performance vs efficiency of using 3D convolution based U-Net compared to the proposed factorized one?\n4. How much performance drop will happen without the temporal attention, ie, per-frame method?\n * Is the space-time factorization necessary? What would be the performance vs efficiency of using 3D convolution based U-Net compared to the proposed factorized one?\n* How much performance drop will happen without the temporal attention, ie, per-frame method? The authors have adequately addressed the limitations and potential negative societal impact of their work.", " This paper shows that diffusion models also perform well for video modeling. By choosing a factorized attention module, it can directly extend a 2D U-Net to 3D space-time, allowing joint training with images possible. Results on unconditional text-to-video generation and video prediction show the strong performance of the proposed model. \n\n 1. A straightforward extension of [39] to videos with a factorized encoder, thus allowing joint training with images possible. It shows tricks like classifier-free guidance is also effective in video generation. \n\n2. Ablate different sampling methods in Table 6 shows that the proposed reconstruction-guided conditional sampling method performs the best. \n\n3. Results on UCF101, K600, and BAIR-RP in Table 1, 2, 3 are outstanding. \n 1. The major technical contribution seems to come from [39] and the novelty on videos such as factorized attention and co-train are extensively studied by previous works. Nevertheless, combining these components together are worth noting. \n\n2. The writing is more or less like a draft version, it makes people with less prior knowledge very hard to follow. \n\n3. In formula (8), How much extra training time does the reconstruction cost? Yes, the authors adequately addressed the limitations and potential negative societal impact.", " This paper adapts diffusion models for image generation to generate temporally coherent videos. 
The adaptation was done by (1) using a 3D U-Net diffusion model architecture to generate a fixed number of video frames; (2) using reconstruction guidance to generate temporally coherent longer videos. This paper claims that the proposed method generates SOTA results on benchmarks for unconditional video generation and video prediction. The paper also claims that joint training using both images and videos improves the generated video quality. Strengths: \n- The approach is a natural extension of diffusion models for image generation.\n- Results are great both in terms of visual quality and numerical results.\n\nWeaknesses:\n- The claim of joint training using images and videos is better supported with more experiments:\n1. As mentioned in the paper, images are better sampled outside of the video dataset.\n2. In the unconditional video generation task, does joint training help? \n- The video frame interpolation literature would be better discussed in the video extension section. Given the training hardware used, how long does training take on these tasks? Yes.", " The paper proposes a new diffusion model for video generation. The proposed model uses a 3D U-Net as the backbone for the diffusion model. Furthermore, the paper introduces a novel conditional sampling technique for spatial and temporal video generation. The proposed method is evaluated on unconditional video modeling, video prediction, and text-conditioned video generation. Moreover, the proposed method shows a large improvement compared to the baselines. Strengths:\n- The paper introduces a novel diffusion model for video generation. \n- The paper proposes two novel ideas to derive video X^b from a sampled video X^a.\n- The improvement in the results is relatively high.\n- The paper is well written and easy to follow.\n\n\n\nWeaknesses:\n- Results - looking at the generated videos, the resolution is low. Can you train the model at a higher resolution? We all know that diffusion models are capable of generating high-fidelity images, so it seems straightforward to use the super-resolution approach as in CDM [cascaded diffusion model] and get better videos.\n- The paper should compare to other previous results, for example, StyleGAN-V [CVPR22] and DIGAN [ICLR22]. It would be helpful to provide the reason why you chose to use the 3D U-Net. Are there any other architectures that could also be appropriate for this task?\n\nRegarding the datasets chosen - can you think of a dataset that has more diverse content and interactions? For example, not only videos with repetitive movements. The authors provide a section regarding the potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, 5, 6, 9, 7 ]
[ -1, -1, -1, -1, -1, 3, 2, 5, 5 ]
[ "dSibyDLkyKw", "hAUAZ1g2T7e", "s8_mHAawBes", "kTRdJcegaEt", "lcVnNZPoZqA", "nips_2022_f3zNgKga_ep", "nips_2022_f3zNgKga_ep", "nips_2022_f3zNgKga_ep", "nips_2022_f3zNgKga_ep" ]
nips_2022_ZJe-XahpyBf
UDC: Unified DNAS for Compressible TinyML Models for Neural Processing Units
Deploying TinyML models on low-cost IoT hardware is very challenging, due to limited device memory capacity. Neural processing unit (NPU) hardware address the memory challenge by using model compression to exploit weight quantization and sparsity to fit more parameters in the same footprint. However, designing compressible neural networks (NNs) is challenging, as it expands the design space across which we must make balanced trade-offs. This paper demonstrates Unified DNAS for Compressible (UDC) NNs, which explores a large search space to generate state-of-the-art compressible NNs for NPU. ImageNet results show UDC networks are up to 3.35x smaller (iso-accuracy) or 6.25% more accurate (iso-model size) than previous work.
Accept
In this paper, the authors present a new way to obtain compressible neural networks to fit on resource-constrained NPU-based hardware. Initial reviews were mixed, but the authors successfully managed to respond to reviewers' concerns during the rebuttal period. Several reviewers pointed out clarity issues, but (1) some of these issues came from reviewers not reading the paper carefully enough, and (2) others were properly addressed by the authors. I also want to acknowledge that the most negative review is a short one, falling below NeurIPS quality standards. After discussion, all reviewers are leaning towards acceptance, agreeing that the paper successfully demonstrates the superiority of the proposed method vs. existing relevant baselines. As a result, I also recommend acceptance.
train
[ "eu4np2yyCNR", "HkcVUemK2Uc", "ThGfuxsDdlj", "wtKSixofaed", "WGs0Fw5kqK", "0KGklds9VVk", "tMAjRF7DBP6", "DnDe1vmpmUF", "vcpuK5qYbi", "LfOBDuLi1rS", "TsmjiWKhlgl", "klLT4YWJSFv" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Looking at the authors' response to my and other reviewers' comments. I am happy to see that my concerns are adequately addressed. I have updated my score.", " I apologize for missing Figure 4 in the main text. Figure 4 indeed justifies the authors' claims. I update my scores accordingly. I still think that the paper is a bit hard to follow, please try to make the Introduction section more clear. \n\n", " > Can you compare the searching costs with other work? For example, the GPU days and training FLOPs could be helpful in understanding this work when compared with others\n\nWe appreciate the reviewer’s comments about the relative runtime of UDC compared to other NAS algorithms. Part of the challenge in comparing algorithm runtimes is that the runtime generally depends on 3 things: 1) Software implementation quality and hardware platform, which jointly form the “system” that the algorithm runs on, 2) the algorithm itself, i.e. what the algorithm is actually doing during the search, 3) the number of epochs (or amount of data processed) for which the search is run. The challenge is that only 2-3 are algorithm dependent, whereas 1) depends on the code quality and the resources of the researcher (i.e. some researchers may be able to afford higher grade GPUs which exhibit much higher throughput compared to low-end GPUs). \n\nIn the paper, we tried to address the runtime question by saying that UDC runs at roughly half the speed (230 images / s) of a typical training experiment (460 images / s) on ImageNet. Although we did not state this in the paper, we also observed that UDC’s speed is nearly independent of the search space size in the ImageNet experiment. In other words, introducing quantization and unstructured pruning into the search does not lead to a slowdown compared to only searching for width. Likewise, increasing the number of options (i.e. how many distinct sparsity rates to consider) also does not lead to a slowdown. This suggests that UDC’s search speed is highly scalable with the search space size and that most of the slowdown can be accounted for by the system which the algorithm is running on. For that reason, we list the search cost of UDC in the last column of Table 3 as 3.2 GPUD normalized to a system running at 460 images / s.\n\nIn order to compare with other works, we now attempt to disentangle the 3 components that make up the search cost in the table below. We list the system and algorithm specific search speed, when available from the reference, the number of search epochs, the search cost in GPUD under the system used in the reference, and the search cost in GPUD under a common system assumed to be running at 460 images / s. For several references, the algorithm begins with a pretrained model, which we assume is trained for a standard 200 epochs. The table shows that in absolute terms (i.e. the system specific cost), UDC is 1.4x faster than FBNet / FBNetV2. When comparing approaches based on a normalized system running at 460 images / s, UDC is faster than all of the competing approaches other than FBNet / FBNetV2. Our hope is that the table below gives a rough sense of the relative search cost of UDC and the competing methods. 
\n\n| Algorithm | Images / s (system and algorithm specific) | search epochs (algorithm specific) | GPUD (system and algorithm specific) | GPUD normalized to 460 images / s (algorithm specific) |\n| --- | --- | --- | --- | --- |\nUDC |\t230 | 100 | 6.4 | 3.2 |\nFBNet |\t14.8 (calculated using details in reference: 11.5 million images, processed over 216 GPU hours) |\t90 (on 1/10 of ImageNet classes)\t| 9\t| 0.3 |\nFBNetV2 |\t14.8 (calculated using details in reference: 11.5 million images, processed over 216 GPU hours) |\t90 (on 1/10 of ImageNet classes)\t| 9 |\t0.3 |\nMCUNet\t| -- |\t450 |\t--\t| 14.5 |\nMCUNetV2 |\t-- |\t450 |\t--\t| 14.5 |\nGong et al. |\t-- |\t-- |\t-- |\t-- |\nChoi et al. |\t-- |\t200 + 12.5 |\t--\t| 6.8 |\nUhlich et al. |\t-- |\t200 + 50 |\t-- |\t8 |", " We thank the reviewer for providing feedback on our work. In the following, we address the reviewer’s questions.\n\n> For example, experiment setups and code examples are helpful, especially if no source code available\n\nThe detailed experimental setups are described in Appendices E-G. The complete algorithm pseudo-code is presented in Algorithms 1-2, line 220. We are happy to provide some additional pseudo-code in the revision, so as to make using our approach easier by others. \n\n> The results are useful but not significant. In some scenarios with strict limitations on model size, the proposed method could be used\n\nIn the TinyML setting, where models are deployed on resource constrained embedded hardware, model size is a hugely important factor because of the severely limited Flash storage capacity (see the references in lines 26-28). Therefore, taking full advantage of the model compression ability of modern NPUs offers a unique opportunity for deploying more accurate models on more efficient hardware. \n\n> What is the role of the Vela compiler in the proposed framework? Are the results on model size, e.g., in Fig. 4 & Fig. 5, from the use of the Vela compiler afterwards?\n\nAs we discuss in lines 272-274, we use Eq. (2) to measure compressed model size in all figures and for all competing methods, including UDC. Vela is not used in any of our results comparing UDC to existing NAS methods. We introduce the Vela compiler / compression engine as motivation for our work and as a real-world realization of a deployment scenario which uses model compression. The only place where we use Vela to report compressed model sizes is in Table 4, where we verify that the compression rate predicted by Eq. (2) is achievable by the Vela compiler (indeed Vela achieves even higher compression ratio compared to Eq. (2)).\n\n> What are the performance results, in terms of latency/inference time, of the searched, compressed, and deployed models? Setting the optimization goal to be smallest model size is reasonable if the model size matters the most. But what is the impact on inference performance? The work mentioned using unstructured pruning, would that hurt the performance? Please comment on that.”\n\nIn the use case we studied, decompression is implemented in the Ethos U55 hardware. Since all models pass through a decompression stage during inference, there is no penalty for executing compressed models (i.e. as generated by UDC) compared to off-the-shelf models on the Ethos U55 hardware (or any hardware which has HW decompression). Moreover, for a given model, weight compression leads to memory bandwidth savings, further compounding the benefit at inference time. 
Reducing model size allows for larger portions of weight stationary execution through compiler optimizations such as cascading (https://developer.arm.com/documentation/101888/0500/NPU-software-overview/NPU-software-tooling/The-Vela-compiler). These optimizations result in less data movement from external memory, and thus, less energy consumption and possibly lower latency.\n\nAs a side note, although our models are heavily pruned and therefore highly compressible by Vela, there is no computational benefit to this pruning since neither the hardware nor inference software stack are designed to take advantage of sparsity to save computation. As such, we don’t expect much of a latency improvement when comparing a given model to its pruned and quantized counterpart. With that being said, it is possible to design future TinyML hardware that takes advantage of model sparsity and more aggressive quantization to give good latency improvement as well.", " > It would be interesting to know how long it takes to converge the random search to the optimal design picked up by UDC (within reasonable time constraints of course).\n\nWe agree that it would be interesting to conduct this study, but we can already see that UDC achieves significantly better results compared to random search, using a small fraction of the trials. We did not have spare computational resources to extend the random search to multiple hundreds or thousands of trials, but we will try and generate these results for the revision.\n\n> Additionally, could authors share how they selected the memory size constraint values for the experiments shared in the paper\n\nIn practice, the model size constraints are determined by the Flash memory size of the deployment hardware platform. To be sure, 0.5-1.25 MB Flash sizes are fairly common for commodity hardware platforms (i.e. STM32F446RE, STM32F746ZG, STM32F767ZI) and are often used in research papers targeting deployment on constrained hardware platforms [1,2]. As such we targeted this range because it represents a reasonable, but extremely challenging deployment scenario.\n\n> is there some understanding if the superiority of UDC holds for larger/smaller mem constraints and if yes how a user could understand if they fall in that range.\n\nSince we only experimented with UDC in the very low to medium model size regime, we are hesitant to speculate how well UDC would perform in the very high model size regime. With that being said, we can see in Fig. 4a that UDC finds an ImageNet model with >72% accuracy while being 3.35x smaller than an APQ model at 4MB, which is considered a rather large model size in the context of resource constrained hardware deployment. This suggests that UDC does scale well when moving from very small to medium sized models. \n\n[1] Ji Lin, Wei-Ming Chen, Yujun Lin, Chuang Gan, Song Han, et al. \"Mcunet: Tiny deep learning on iot devices.\" Advances in Neural Information Processing Systems, 2020.\n\n[2] Colby Banbury, Chuteng Zhou, Igor Fedorov, Ramon Matas, Urmish Thakker, Dibakar Gope, Vijay Janapa Reddi, Matthew Mattina, and Paul Whatmough. \"Micronets: Neural network architectures for deploying tinyml applications on commodity microcontrollers.\" Proceedings of Machine Learning and Systems, 3, 2021.\n", " We thank the reviewer for the thorough review and feedback. 
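The GPU-day figures quoted in the normalized search-cost comparison earlier in this discussion reduce to a short calculation. The sketch below is only an illustration of that normalization under stated assumptions — the ~1.28M images-per-epoch figure for ImageNet and the helper name `gpu_days` are assumptions made for illustration, not taken from the UDC code — but it reproduces the 3.2 and 6.4 GPU-day values quoted for UDC.

```python
# Minimal sketch of the GPU-day normalization discussed in this thread.
# Assumption (not from the authors' code): one ImageNet epoch streams
# roughly 1.28M training images.
IMAGES_PER_EPOCH = 1_281_167
SECONDS_PER_DAY = 24 * 3600


def gpu_days(search_epochs: float, images_per_second: float = 460.0) -> float:
    """GPU-days needed to process `search_epochs` epochs at a given throughput."""
    total_images = search_epochs * IMAGES_PER_EPOCH
    return total_images / images_per_second / SECONDS_PER_DAY


if __name__ == "__main__":
    print(round(gpu_days(100), 1))                            # UDC, normalized system: ~3.2
    print(round(gpu_days(100, images_per_second=230.0), 1))   # UDC, measured 230 images/s: ~6.4
    print(round(gpu_days(450), 1))                            # MCUNet / MCUNetV2: ~14.5
```

Plugging in the remaining epoch counts from the table (e.g., 200 + 12.5 and 200 + 50 for the pretrain-then-search baselines) gives the other normalized values in the same way.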
In the following, we address the reviewer’s concerns point by point.\n\n> It is not obvious what these variables denote, ξ & ν, kindly add a description.\n\nThe symbols are already defined in the paper ($\\xi$ is defined in line 191 and $\\nu$ is defined in line 219), but we will make sure to highlight their meaning with further description in the revision.\n\n> Figure 4 illustrates the superiority of UDC in terms of accuracy and model size, could the authors also shed light on the runtime it takes to converge to the final NN design UDC picks compared to other approaches (e.g., MCUNet)\n\nWe appreciate the reviewer’s comments about the relative runtime of UDC compared to other NAS algorithms. Part of the challenge in comparing algorithm runtimes is that the runtime generally depends on 3 things: 1) Software implementation quality and hardware platform, which jointly form the “system” that the algorithm runs on, 2) the algorithm itself, i.e. what the algorithm is actually doing during the search, 3) the number of epochs (or amount of data processed) for which the search is run. The challenge is that only 2-3 are algorithm dependent, whereas 1) depends on the code quality and the resources of the researcher (i.e. some researchers may be able to afford higher grade GPUs which exhibit much higher throughput compared to low-end GPUs). \n\nIn the paper, we tried to address the runtime question by saying that UDC runs at roughly half the speed (230 images / s) of a typical training experiment (460 images / s) on ImageNet. Although we did not state this in the paper, we also observed that UDC’s speed is nearly independent of the search space size in the ImageNet experiment. In other words, introducing quantization and unstructured pruning into the search does not lead to a slowdown compared to only searching for width. Likewise, increasing the number of options (i.e. how many distinct sparsity rates to consider) also does not lead to a slowdown. This suggests that UDC’s search speed is highly scalable with the search space size and that most of the slowdown can be accounted for by the system which the algorithm is running on. For that reason, we list the search cost of UDC in the last column of Table 3 as 3.2 GPUD normalized to a system running at 460 images / s.\n\nIn order to compare with other works, we now attempt to disentangle the 3 components that make up the search cost in the table below. We list the system and algorithm specific search speed, when available from the reference, the number of search epochs, the search cost in GPUD under the system used in the reference, and the search cost in GPUD under a common system assumed to be running at 460 images / s. For several references, the algorithm begins with a pretrained model, which we assume is trained for a standard 200 epochs. The table shows that in absolute terms (i.e. the system specific cost), UDC is 1.4x faster than FBNet / FBNetV2. When comparing approaches based on a normalized system running at 460 images / s, UDC is faster than all of the competing approaches other than FBNet / FBNetV2. 
Our hope is that the table below gives a rough sense of the relative search cost of UDC and the competing methods.\n\n| Algorithm | Images / s (system and algorithm specific) | search epochs (algorithm specific) | GPUD (system and algorithm specific) | GPUD normalized to 460 images / s (algorithm specific) |\n| ----------- | ----------- | ----------- | ----------- | ----------- |\n| UDC | 230 | 100 | 6.4 | 3.2 | \n| FBNet | 14.8 (calculated using details in reference: 11.5 million images, processed over 216 GPU hours) | 90 (on 1/10 of ImageNet classes) | 9 | 0.3 |\n| FBNetV2 | 14.8 (calculated using details in reference: 11.5 million images, processed over 216 GPU hours) | 90 (on 1/10 of ImageNet classes) | 9 | 0.3 |\nMCUNet | -- | 450 | -- | 14.5 |\nMCUNetV2 | -- | 450 | -- | 14.5 |\nGong et al. | -- | -- | -- | -- |\nChoi et al. | -- | 200 + 12.5 | -- | 6.8 |\nUhlich et al. | -- | 200 + 50 | -- | 8\n\n> Also note that authors should be careful that while comparing to random search the resource utilization for UDC should be similar e.g., running equal number of parallel GPU instances for a fair comparison\n\nIndeed, this is a great point. Since we quantify search cost in terms of number of trials in Fig. 5b and each trial uses the same amount of GPU resources, the data does in fact provide a fair comparison between UDC and random search.", " We thank the reviewer for the feedback. In the following, we address the reviewer’s concerns point by point.\n\n> Make it clear that the target device is mobile-based NPU with limited model storage.\n\nWe tried to be as clear as possible about this point. The abstract states our problem setup as deployment onto memory constrained NPUs (lines 1-2). We state that our target hardware platform is a Flash memory-limited NPU (line 26). We give the problem statement in (P1), which makes clear that our goal is to design small models (in terms of memory cost) while exploiting NPU model compression (line 32).\n\n> Models targeting small NPU and MCU have stringent latency and memory utilization constraints, which are not discussed in this paper.\n\nOur goal was to explore the intersection of model compression and NAS, since hardware weight decompression is a new feature of modern NPUs like Arm Ethos U55. Model compression can have a significant impact in NPUs like Ethos: reducing model size allows for larger portions of weight stationary execution through compiler optimizations such as cascading (https://developer.arm.com/documentation/101888/0500/NPU-software-overview/NPU-software-tooling/The-Vela-compiler). These optimizations result in less data movement from external memory, and thus, less energy consumption and possibly lower latency. With that being said, we completely agree that real-world use cases must also consider latency and SRAM memory utilization and we can bring out this point in the paper revision.\n\n> Limited comparison with other NAS algorithms\n\nWe compared with 8 SOTA NAS methods in our ImageNet experiment (Fig. 4a), 5 SOTA methods in the CIFAR100 experiment in Fig. 4b, and 3 SOTA methods in our super resolution experiment (Fig. 4c). Counting the additional approaches we compared UDC to in the Appendix, we compared to 15 unique SOTA algorithms in total. 
Therefore, we disagree that our comparison to other NAS algorithms is limited in scope.\n\nOur goal was to compare with every single NAS algorithm that: 1) produces ImageNet scale results, 2) yields models under 1.5MB, 3) yields models that can be deployed with integer math (in other words, the criteria in Table 1). We are not aware of any other NAS algorithms satisfying constraints 1-3, other than the ones we have compared with. In fact, Appendix Fig. 7-8 provides a comparison to 3 algorithms which violate 3) and we show that, even in this case, UDC is pareto-dominant.\n\n> There are other NAS papers for MCU, please also cite and compare to them. For example: https://arxiv.org/abs/2010.14246\n\nWe will make sure to cite uNAS (the paper which the reviewer has referenced) as related work. At the same time, we do not see how a direct comparison between uNAS and UDC can be made. First, uNAS does not contain results for large scale datasets like ImageNet or medium-difficulty datasets like CIFAR100. Second, UDC yields results much faster: UDC finds ImageNet results in roughly 6.4 GPUD, while uNAS takes 23 GPUD to yield results on CIFAR-10. Third, UDC targets NPU deployment where compression plays a big factor, whereas uNAS targets MCUs running on traditional ARM M-class processors. Although it is not clear exactly which MCU uNAS targets, uNAS contains some experiments on NUCLEO-H743ZI2. This MCU runs on an ARM M7 processor, which does not support model compression.\n\n>What is the decompression overhead when the compressed model is executed on the target device?\n\nIn the use case we studied, decompression is implemented in the Ethos U55 hardware. Since all models pass through a decompression stage during inference, there is no penalty for executing compressed models (i.e. as generated by UDC) compared to off-the-shelf models on the Ethos U55 hardware (or any hardware which has HW decompression). Moreover, for a given model, weight compression leads to memory bandwidth savings, further compounding the benefit at inference time. As we mentioned above, reducing model size allows for larger portions of weight stationary execution through compiler optimizations such as cascading (https://developer.arm.com/documentation/101888/0500/NPU-software-overview/NPU-software-tooling/The-Vela-compiler). These optimizations result in less data movement from external memory, and thus, less energy consumption and possibly lower latency.\n", " Thank you for the feedback. In the following, we address your concerns point by point. \n\n> The authors claim that UDC outperforms prior work in terms of accuracy-model size but I cannot find the supporting experimental results in the main paper. The authors choose to present these main experimental results in the appendix, which I find a bit strange since it is hard to evaluate the paper without the experimental results. [...] Putting the main results in the appendix is another sign of a presentation problem\n\nOur main results are shown in Fig. 4, right above line 249 in the body of the main paper. We placed the numerical results used to generate Fig. 4 in the Appendix, but we emphasize that the underlying data is exactly the same. We presented the main results in figure form for a more pleasant reading experience and recorded the numerical results to allow for easier comparison to our work.\n\nIn addition, by avoiding presenting redundant information (i.e. the results in Fig. 
4 in both graphical and numeric format), we were able to include additional figures that lend evidence to the merits of our method (i.e. the comparison to random search in Fig. 5a-b, the benefit of the novel number format in Fig. 5c, the ablation study in Table 3, and the illustration of deployment to NPU using the Vela compiler in Table 4). \n\n> Moreover, a brief description of the prior methods that the UDC framework is compared to such as HAQ, Gong et al. [27], McDonnel, [51], FBNet, MBNetV2, Choi et al., [18] from Table 5-6 in the appendix would be really helpful.\n\nWe highlight the main differences between UDC and existing methods in Table 1, with additional explanation about how UDC differs from MCUNet, Yang et al., APQ, Gong et al., Choi et al., and Uhlich et al. in Section 2.\n \nWe will make sure to add a more thorough explanation of existing approaches in the Appendix in the revision.\n", " The paper proposes a way of exploring compressible NNs across different architectures, sparsity levels, and quantization levels during training. The proposed framework is called Unified DNAS for Comppresssible (UDC) NNs and it can yield compressed NNs that show better accuracy-model size trade-offs than prior work. The paper tackles an important problem and the proposed solution seems reasonable. The authors claim that UDC outperforms prior work in terms of accuracy-model size and the experiments seem to support this. A brief description of the prior methods that the UDC framework is compared to such as HAQ, Gong et al. [27], McDonnel, [51], FBNet, MBNetV2, Choi et al., [18] would be really helpful for readers. \n\nIn my opinion, the paper is a bit hard to follow. Specifically, Introduction could be a lot clearer with a more structured presentation and perhaps with a few subsections. Please see the previous section. The authors discussed the limitations of their work.", " This paper presents a differentiable NAS approach targetting compressible NN for NPUs with low memory footprint.\nThe approach conducts a joint search over NN architecture, weight bitwidths and layer-wise weight sparsity levels to find models with the smallest model size. Strengths:\n- A theoretical lower-bound of the storage size that is cheap to be computed.\n- Clear three-stage training process to deal with quantization and sparsity.\n- Improved pareto-front of model size vs accuracy over previous work.\n- The search algorithm has better sample efficiency vs random method.\n\nWeaknesses:\n- Make it clear that the target device is mobile-based NPU with limited model storage.\n- Models targetting small NPU and MCU have strigent latency and memory utilization constraints, which are not discussed in this paper.\n- Limited comparison with other NAS algorithms.\n- There are other NAS papers for MCU, please also cite and compare to them. For example: https://arxiv.org/abs/2010.14246 - What is the decompression overhead when the compressed model is executed on the target device? The limitations are adequately discussed.", " The authors present am updated Differential Neural Architecture Search algorithm, UDC, which incorporates model compression features such as weight sparsity and quantization as part of the search space for the ideal (high accuracy, small model size) neural network (NN) model. In addition, keeping in mind the constraints typically faced by TinyML models, the authors also enforce a strict limit on the model size to ensure it can be implemented in resource constrained IoT applications. 
Further, the presented algorithm enables effective exploration of accuracy vs model size tradeoff. Lastly, the paper demonstrates the pareto dominance of NNs designed by UDC compared to prior art for a range of ML benchmarks and varying hardware constraints on Ethos U55 NPU. The paper presents a thorough comparison of UDC against past techniques that attempt similar co-optimization of ML models while accounting for hardware constraints. UDC tackles a wider range of constraints compared to all listed in prior art. \n\nSection 3 and 4 describe the details of the design model and the proposed DNAS algorithm. These sections were somewhat hard to follow, however, it is understandable considering the mathematical nature of problem. It is not obvious what these variables denote, ξ & ν, kindly add a description. \n\nSection 6 shows a comparative study of generating NNs using UDC and other prior art for state-of-the-art (SOTA) ML benchmarks under different memory size constraints. Figure 4 illustrates the superiority of UDC in terms of accuracy and model size, could the authors also shed light on the runtime it takes to converge to the final NN design UDC picks compared to other approaches (e.g., MCUNet). Considering UDC accounts for a larger set of constraints compared to other approaches it would be interesting if it can demonstrate comparable runtimes too. \n\nFigure 5b, compares UDC results to random search, and the authors do share some insight on UDC runtime compared to training a baseline network (~2x), if they could share this runtime information it would be handy. Also note that authors should be careful that while comparing to random search the resource utilization for UDC should be similar e.g., running equal number of parallel GPU instances for a fair comparison. It would be interesting to know how long it takes to converge the random search to the optimal design picked up by UDC (withing reasonable time constraints of course).\n \nAdditionally, could authors share how they selected the memory size constraint values for the experiments shared in the paper, is there some understanding if the superiority of UDC holds for larger/smaller mem constraints and if yes how a user could understand if they fall in that range. \n\nTo conclude, this work promises a very useful optimization tool for designing small but high accuracy NNs that can fit on typical IoT devices servicing TinyML applications.\n Refer to questions addressed in Strengths/Weakness section. Authors do not present any data showing the timing performance of UDC compared to other approaches, some perspective on that would be helpful. Also any explanation on how the memory size constraints were selected for presented experiments would be helpful. ", " This paper is on model compression (quantize and prune) with hardware constraints. The proposed search method is an extension of differentiable NAS to learn weight sparsity ratios and bit-widths per layer. Conceptually, this work combines previous ideas, such as DNAS (of layer width, depth, or operator) and learning of layer-wise bitwidth and sparsity, and yield pareto-optimal results on model size vs. accuracy. \n - The methods used in the paper are not new. But this work is a sort of combination of well-known techniques. Thus, making it work, i.e., DNAS for model architecture with pruning and quantization, is a new contribution. 
The related work part discusses the differences from prior work, and the related work is adequately cited to my knowledge.\n- The submission is technically sound with mostly empirical results. But it does not evaluate the weaknesses. For example, the searching costs of the proposed algorithm. \n- The writing is okay. The design decisions and the proposed algorithm are described with details. Since the major contribution, to my understanding, is the practical issues of making DNAS with compression techniques working, I think more details are necessary to help readers and practitioners to use the findings in this work. For example, experiment setups and code examples are helpful, especially if no source code available. \n- The results are useful but not significant. In some scenarios with strict limitations on model size, the proposed method could be used. The reported benefits (model size vs. accuracy) are not unique. - What is the role of the Vela compiler in the proposed framework? Are the results on model size, e.g., in Fig. 4 & Fig. 5, from the use of the Vela compiler afterwards? \n- What are the performance results, in terms of latency/inference time, of the searched, compressed, and deployed models? Setting the optimization goal to be smallest model size is reasonable if the model size matters the most. But what is the impact on inference performance? The work mentioned using unstructured pruning, would that hurt the performance? Please comment on that.\n- Can you compare the searching costs with other work? For example, the GPU days and training FLOPs could be helpful in understanding this work when compared with others. The authors have addressed the potential societal impact. \nThe limitations are not addressed enough. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "tMAjRF7DBP6", "DnDe1vmpmUF", "wtKSixofaed", "klLT4YWJSFv", "0KGklds9VVk", "TsmjiWKhlgl", "LfOBDuLi1rS", "vcpuK5qYbi", "nips_2022_ZJe-XahpyBf", "nips_2022_ZJe-XahpyBf", "nips_2022_ZJe-XahpyBf", "nips_2022_ZJe-XahpyBf" ]
nips_2022_vkGk2HI8oOP
Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias
It has become cognitive inertia to employ cross-entropy loss function in classification related tasks. In the untargeted attacks on graph structure, the gradients derived from the attack objective are the attacker's basis for evaluating a perturbation scheme. Previous methods use negative cross-entropy loss as the attack objective in attacking node-level classification models. However, the suitability of the cross-entropy function for constructing the untargeted attack objective has yet been discussed in previous works. This paper argues about the previous unreasonable attack objective from the perspective of budget allocation. We demonstrate theoretically and empirically that negative cross-entropy tends to produce more significant gradients from nodes with lower confidence in the labeled classes, even if the predicted classes of these nodes have been misled. To free up these inefficient attack budgets, we propose a simple attack model for untargeted attacks on graph structure based on a novel attack objective which generates unweighted gradients on graph structures that are not affected by the node confidence. By conducting experiments in gray-box poisoning attack scenarios, we demonstrate that a reasonable budget allocation can significantly improve the effectiveness of gradient-based edge perturbations without any extra hyper-parameter.
Accept
The authors study graph modification attack (through editing the edges) in the setting of untargeted poisoning and show that negative cross entropy is not a good candidate for the attack loss. Instead they propose a novel attack objective to study the problem. The reviewers found the topic timely and of interest to the community. They felt that the theoretical and empirical analysis could be improved to validate their claims, but overall the positives seemed to outweigh the negative perceived by the reviewers.
train
[ "dApOlZBBA-t", "R2SXzCfwge", "fIG7rhHMFSD", "r9HfYStbN7", "ynukfXIPmMX", "GuFwtWroIOo", "FtQrDa1Jl-2", "Si7LQjz4bLG", "1tfPnJMXTPv", "9Nlpqva0GY8", "cCRIjMCTWHc", "L505chbc8oE", "mf0Cfk-EY-1", "G2OLDLlcjX", "Dfr36_U5oKM", "UYDqGwdLDj7", "RS3lM2EfeqP", "ZxUXaK313sV", "NIHICjMgGgM", "IOMpEIEjm_j", "gz_ITAEdvni", "QQvQALeNA3g", "DrnrWkjru6r", "Wsle5sxb8Tx", "b7_BTEvZh7M", "bDH2W8aZ_W", "Imfyr8NILL8" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the late reply. \n\nFirst, my point isn't about injection or modification, instead, I am concerned about the theoretical part in the current version of the paper. The authors just show an interesting discovery, which however isn't shown to imply any **rigorous conclusions**. \n\nNo matter whether the adversary takes an injection strategy or not, the perturbed edges would influence the neighbors, is that correct? I list the recent works including both injection and modification, with the hope that the authors could use the tools to provide rigorous analysis of their interesting discoveries.\n\nHowever, the authors use a lot of intuitive descriptions but one can hardly justify the claims. Overall, the authors **didn't directly reply** to my two main concerns.\n\nAs for the response to the theoretical part, the authors use lots of intuitive descriptions without rigorous support:\n> However, based on the original/undebiased gradient, the attacker is likely to select an edge mainly because a node with low confidence level contributes a very significant gradient value on one of its nearby edges (edge state can be 0 or 1).\n\n> but that the attacker could have a more reasonable strategy leading to better attack performance. \n\n\nThe two points above are actually what I'd like to see the authors could provide formal justifications. \n\n\n**If without formal results**, the paper is expected to provide more empirical understandings, in other words, results at more perturbation rates and more defense models that could cover the literature, and provide more understanding of the discovery based on the empirical observations. However, the current empirical support is too limited to understanding the behavior (as I pointed out in the previous replies, less than Meta-attack that comes from ~3 years ago). \n**All of the concerning results are critical to evaluating the contributions and claims of the paper**. It's hard to evaluate the paper if the results are detained to the camera-ready phase. The defense models used in [2,3] have high coverage of the existing literature. However, the authors are suggested to refer to the *comprehensive collections provided by the authors of the surveys cited by the paper*. \n\n\nThe other two minor points:\n\nI use the authors' descriptions of RGCN to exemplify many of the *unsupported claims* of the authors. The authors always begin with an accuse of misunderstanding of me but can't provide direct answers. The authors could show more respect for the efforts of all reviewers during the rebuttal/discussion, by showing the evidence and direct answers. \n\nThe other claim of \"fair comparison\" seems to be another misconception of the authors. If we compare a baseline from many years ago, like a multi-layer perceptron for example, does \"fair comparison\" refer to we only need to compare the baselines in the multi-layer perceptron paper?\n ", " We thank the reviewer for the replys.\n\nReviewer may be more familiar with node injection attacks, as reviewer is concerned with pollution propagation through edges to a test node and its neighboring nodes.\nDue to the aggregation mechanism of GNN, changing one edge e_ij certainly affects more than one node (v_i, v_j and their neighboring nodes). However, based on the original/undebiased gradient, the attacker is likely to select an edge mainly because a node with low confidence level contributes a very significant gradient value on one of its nearby edges (edge state can be 0 or 1). 
'Waste' does not mean that an edge perturbation is not able to reduce the performance of the victim model, but that the attacker could have a more reasonable strategy leading to better attack performance. The visualizations in Figure 3 and 4 are the empirical evidence. The methodology in the paper is talking about broader nodes. There are only three nodes in the example just making the example easy to understand.\n\nWe have acknowledged the value of [3] in our response, rather than considering it as an unreliable arxiv. The reviewers felt that we avoided testing on defense models. In the limited discussion period, we have provided some experiment on larger attack budget, RGCN and the white-box setting in the response to reviewer 2pFy. Reproducing the work of others takes time, and authors need to prepare other submissions before the near deadline. We have mentioned in our response that we plan to reproduce some other open-source defense models for testing. Does the reviewer suggest us to test on the defense model of [3]? If so, we will prioritize this job. If not, the reviewer can feel free to recommend specified reproducible defense models.\n\nGray-box attack setting has the clear definition that the victim model should be unknown, which is mentioned in the surveys cited in the review. We were not trying to dispute the reviewer's knowledge in this subfield, but to clarify the experimental scenario. Since white-box attacks, unlike gray-box settings, do not qualify for the study of transferable attacks, we spent words on clarification.", " About the literature review and experiments, I have already pointed out the surveys and a specific defense methods **accepted** in previous NeurIPS conference, but the authors keep avoiding providing more empirical support and finding excuses like \"we are used to not refer to arxiv.\". Please be more serious about YOUR RESEARCH.", " It's not me that misunderstands the authors' experiments. The authors described that \"Besides, we attack the defense model RGCN [5] (pytorch implementation from DeepRobust [6]). The results:\", so **HOW CAN I KNOW WHAT EXACTLY YOU ARE ATTACKING?**\n\nThe authors could be more serious about their claims, including the claims like \"misunderstanding\", \"misconception\" in the rebuttal, and their claims in the paper.", " Can the authors consider the influences of the gradients to broader nodes? The network contains more than 3 nodes. If the adversary follows the gradient direction pointed by the gradient from $v_i$, will it also affect the losses on neighbors of $v_i$ that have the same class as $v_i$? If so, how can the authors term such behavior a \"waste\", when without any evidence?", " 2. **How would the influences be changed by debiasing A_{vi}^{grad}?**\n\nIn the above example, we did not mention the confidence levels of **v_1**, **v_2** and **v_3**. We continue to assume that they have **confidence levels** of **0.1**, **0.5**, and **0.5**, respectively. Cross-entropy loss is suitable for classification tasks because it 'amplifies' the loss generated by low confidence nodes (which allows the classification model to preferentially fit the misclassified nodes). This is because the derivative of the cross-entropy function with respect to the node confidence **P** is P^{-1} (Equation 9-11). Different from classification tasks, the goal of this paper is to mislead the predicted class of test nodes. 
**Low-confidence nodes similar to v_1 are already more likely to be misleading**, so ''amplifying' the losses of nodes like *v_1* yield a waste of attack budget. Our proposed approach aims to solve this problem. \n\nLet’s get back to the example scenario. On edges **e_1** and **e_2**, the gradients of node *v_1* before it is ‘amplified’ (i.e., after debiased) are **0.05** and **0.03** ((0.5,0.3) × 0.1); the gradients of node *v_2* before it is ‘amplified’ are **0.15* and *0.05** ((0.3,0.1) × 0.5); the gradients of node *v_3* before it is ‘amplified’ are **-0.1** and **0.05** ((-0.2,0.1) × 0.5). From a global perspective, the total gradients generated by all three nodes on *e_1* and *e_2* are **0.1** and **0.13**. This means that changing the state of *e_2* is more likely to affect the model predictions globally.\n\nDoes this example help you understand the difference between the debiased one and the undebiased one?", " Thanks to the reviewer for the reply. \n\nThis may be our last response before the end of the discussion session. We gradually start to realize the reviewer's misunderstanding of the methodology and we try to help reviewers understand it using a simple example (it can be really helpful).\n****\n**About [2,3]:** We have found and read [3] and cited it in the paper, since it has been admitted by the KDD committee and we also recognize the significance of this work. We rarely refer to unpublished studies because they are not yet recognized by high-level conferences. In addition, they are often not open-sourced before they are accepted. As you know, there are lots of papers with irreproducible experimental results (some of them even provide fake results without open-sourced code), so we are used to not refer to arxiv.\n****\n**Misunderstanding: ‘yet the experiments seem to attack the RGCN directly.’** \nOur attack is transferred from a linear GCN (the surrogate model) to unknown GNNs (victim models). We are not attacking RGCNs directly. What the reviewers consider as directly attacks on RGCNs belong to *white-box attacks* rather than gray-box attacks. The difference between white-box and gray-box attacks includes if the attacker can get gradient from the victim model. \n**Regarding to the experiments:** We have expanded the perturbation rate to 10% and 20% and have provided the results in the previous response. We think this concern has been well-solved. Every experiment and visualization we provide in the experimental section is very important (especially the visualization) and they are sufficient to support the methodology. The experiments on more defense models than RGCN suggested by the reviewers may be expanded in the appendix rather than in the main body. We will try to reproduce some open-sourced defense models. If time allows, the complete results on multiple defense models may be expanded as a table in the Appendix.\n****\n**Misunderstanding (the most crucial): the reviewer has not yet understood this paper in the right perspective.**\n\n1. **What is its influence to the neighbors of vi from testset/targetset that have the same or different labels with vi?**\n\nThe reviewer's understanding of the method is biased. It is not an issue of how the loss of increasing v_i will affect its neighboring nodes. Reviewer should understand the structural gradient and this paper in terms of 'how changing an edge will affect its associated nodes'. Let us take the simplest example to try to make the reviewer understand. 
Suppose a scenario with three nodes **v_1**,**v_2** and **v_3** and two edges **e_1** and **e_2**.\n\nThe gradients generated by node *v_1* through attack loss (assume it is the undebiased loss) are **0.5** and **0.3** on *e_1* and *e_2*, respectively; the gradients generated by node *v_2* through attack loss are **0.3** and **0.1** on *e_1* and *e_2*, respectively; the gradients generated by node *v_3* through attack loss are **-0.2** and **0.1** on *e_1* and *e_2*, respectively.\n\nFrom a global perspective, the three nodes produce a total gradient of **0.6** and **0.5** on *e_1* and *e_2* (‘total’ or ‘average’ do not affect the comparison of *e_1* and *e_2*. ‘total’ is easier to recalculate by the reviewer). This means that changing the state of edge *e_1* is more likely to have a larger effect on the global performance (since the attack loss produces a more significant gradient on *e_1*).\n\nThe above scenario is just an example to better help the reviewer understand. In the real case, we also need to consider the state of the edge (0 or 1) and whether the gradient is positive or negative (details in Equation 7).\n\n**Follow-up by response (2)...**\n\n\n\n", " I thank the authors for the follow-up explanation. \n\nFrom the response of A1 to A2, the authors seem not to be aware of the graph's connectivity during the adversarial attack. By \"How would it influence the other nodes from the same/different classes as $v_i$?\", it refers to that, by adding the contribution of $A^{\\text{grad}}_{v_i}$ to $A^{\\text{grad}}$, it is expected to increase the loss of node $v_i$ by gradient ascend after the perturbation, Yes. But furthermore, \n- What is its influence to the neighbors of $v_i$ from testset/targetset that have the same or different labels with $v_i$? \n- How would the influences be changed by debiasing $A^{\\text{grad}}_{v_i}$?\n\nThe reason for considering the neighbors is because all of the target nodes will be considered when calculating the attack performance and these nodes could appear in the neighbors of $v_i$ and be influenced by the debiasing. I couldn't find any corresponding discussions in the paper. Hence it's unclear theoretical implications of the debiasing to other target nodes, and similar to the overall attack performances in theory. Therefore, as I asked in previous replies and my initial review, that the authors should provide more empirical analysis, yet the relevant experiments are severely lacking.\n\nRegarding A3, the authors are aware that \"Our aim is to obtain a transferable attack strategy for the victim model using the gradient information from the surrogate model.\", yet the experiments seem to attack the RGCN directly. Moreover, more perturbation rates and more defense models are needed to fully justify the improvements of the method (which has been already and repeatedly asked since my initial review).\n\nBy the way, both [2,3] have already been available since last year. [2] is accessible via openreview since Oct. 2021. For [3], please check out the arxiv number: https://arxiv.org/abs/2106.07767 . \n\nOverall, I'd also like to note that I also like the interesting discovery in the paper, and I believe **the paper would make a high impact to the community if the authors provide could provide in-depth theoretical and empirical insights behind the discovery.** ", " Thank you for your reply.\n\nBefore answering the three questions specified by the reviewer, we would like to do some clarification. 
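Since the toy numbers in this exchange are easy to mistranscribe, here is a short, self-contained recomputation; the values and variable names come only from the illustrative example (it is not the paper's attack implementation), and it simply contrasts the edge ranking given by the raw gradients with the ranking after each node's contribution is rescaled by its confidence.

```python
# Recomputation of the three-node, two-edge example discussed in this thread.
import numpy as np

# Per-node gradients on edges (e_1, e_2) produced by the attack objective.
undebiased = {
    "v_1": np.array([0.5, 0.3]),
    "v_2": np.array([0.3, 0.1]),
    "v_3": np.array([-0.2, 0.1]),
}
# Assumed confidence of each node on its labeled class (example values only).
confidence = {"v_1": 0.1, "v_2": 0.5, "v_3": 0.5}

# Debiasing removes the 1/P amplification: rescale each node's contribution
# by its confidence before aggregating over nodes.
debiased = {v: g * confidence[v] for v, g in undebiased.items()}

total_raw = sum(undebiased.values())  # [0.6, 0.5]  -> e_1 looks most attractive
total_deb = sum(debiased.values())    # [0.1, 0.13] -> e_2 looks most attractive

print(total_raw, "-> edge", int(np.argmax(np.abs(total_raw))) + 1)
print(total_deb, "-> edge", int(np.argmax(np.abs(total_deb))) + 1)
```

The preferred edge flips from e_1 to e_2, which is exactly the change in budget allocation that this exchange is describing.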
\nFirst, a simplified setup, i.e., linearized GCN, is an empirically better-performing surrogate model. This is not the subject of this paper, but linearized GCNs do perform better than nonlinear GNNs as surrogate models. It is not saying that we consider the linearized GCN as an example victim model. The surrogate model and the victim model are different and do not need to be the same. Our aim is to obtain a transferable attack strategy for the victim model using the gradient information from the surrogate model. \nSecond, the partial gradient matrix is actually present in all gradient-based methods. However, those methods consider the final structural gradient as a whole rather than discussing the partial ones independently. \nFigure 3 (in Section 5.3) verifies the relationship between the partial gradient matrices of all test nodes and their confidence. This figure may be placed in the methodology section or referenced in the methodology so that it will help audiences to understand better.\n****\nQ1: How would it influence the other nodes from the same/different classes as vi? \nA1: The ‘influence’ depends on whether the nodes are connected rather than whether the nodes are in same/different classes. The features of node v_i are passed to its neighbors in the process of aggregation (message passing). For a k-layer GCN, the features of node v_i will be passed to its neighbors within k-hop. Thus, the loss of node v_i is able to generate gradients (via backpropagation) on the features of these neighbors, and on the edges between node v_i and these neighbors.\n****\nQ2: What is the difference comparing the attacks with debiased gradients to those with the undebiased/original gradients? \nA2: We actually discussed this issue in both the Introduction and Method, but we didn't explicitly name 'original' and 'debiased' gradient. Regarding original gradients, since nodes with low confidence levels tend to produce larger partial gradient on the graph structure, the attacker using original structural gradient is more likely to *perturb the edges involved in the message passing of low confidence nodes*. This leads to problems such as 1. attackers always try to make the confidence level of low confidence nodes lower, and thus the contribution of low confidence nodes to the final gradient becomes larger. 2. attackers tend to ignore nodes of relatively higher confidence levels, although they may be vulnerable (since aggregation will forcibly fuse the features of two connected nodes). Debiased gradients allow the attacker to better focus on the global vulnerable nodes compared to original gradients.\n****\nQ3: Are the performance gain observed in the attacks to normal GNNs traded from attack performances to the robust GNNs? \nA3: In our previous response, we added a set of experiments on the defense(robust) model RGCN. We chose RGCN because it is open-sourced in DeepRobust. We can reproduce the RGCN and show the results to the reviewers during the discussion session. The previously shown results demonstrate the effectiveness of our model for RGCN.\n****\nWe have recently read [2] mentioned by the reviewer. It's a interesting work that argues the tendency of node injection in adding edges between nodes with very different attributes. [3] seems to be published recently. It is interesting to relate edge perturbations to the change in homophily. We did not notice [2,3] before completing this article, as they were not published. 
We will discuss them in related work.\n\n\n\n\n\n", " I thank the authors for the explanation. \n\nTo make my doubts about the gap between theoretical analysis and experimental design in the paper clearer, I'd like to highlight the following:\n\nIn theory, the authors only analyze the contribution of a node to the final structural gradient matrix (under a simplified setup, i.e., linearized GNN, w/o considering $\\partial P_{v_i}/\\partial A$). However, no further implications of the biased/unbiased weighted gradients are discussed. To be more specific,\n\n- How would it influence the other nodes from the same/different classes as $v_i$? \n- What is the difference comparing the attacks with debiased gradients to those with the undebiased/original gradients? \n- Are the performance gain observed in the attacks to normal GNNs traded from attack performances to the robust GNNs?\n\nThere are tools available in the literature for the authors to investigate more about the influences to the neighbors, e.g., [1], [2] and [3] (The last two were already available 1 year ago).\n\n[1] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, Stefanie Jegelka. Representation Learning on Graphs with Jumping Knowledge Networks. ICML 2018.\n\n[2] Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng. Understanding and Improving Graph Injection Attack by Promoting Unnoticeability. ICLR 2022.\n\n[3] Jiong Zhu, Junchen Jin, Donald Loveland, Michael T Schaub, Danai Koutra. How does Heterophily Impact the Robustness of Graph Neural Networks? Theoretical Connections and Practical Implications. KDD ’22. ", " We focus on explaining the reviewers' doubts about the methodology in this response. \n\nIn our view, the reviewer's main concern is that our theoretical analysis scenario and the practical application scenario are inconsistent. This is a misconception. As you said, the final structural gradient matrix is composed of all the considered target nodes by averaging. Yes it is, and we are discussing the difference between target nodes in contributing to the final structural gradient matrix. we elaborated that all partial gradient matrices from target nodes are actually weighted by a weight associated with the confidence level. This leads to the fact that each node contributes differently to the final structural gradient. When we discuss the difference between the nodes, there is no way to consider all nodes as a whole. What we need to do is investigate to what extent each node is weighted. \n\nFor example, there are 3 target nodes v_1,v_2, and v_3 with confidence level 0.1,0.5,1 at their labeled classes. Then according to our analysis, the partial gradient matrices of v_1,v_2, and v_3 are weighted by 10, 2, 1, respectively. This makes the partial gradient matrix from v_1 dominant the final structural gradient matrix (because the final one is the average of the partial). 
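The confidence-dependent weighting discussed in this thread can be stated in one line; the display below is only a condensed restatement of the chain rule for a single target node (following the paper's notation loosely, and up to its sign and averaging conventions in Equations 6 and 9-11), and it is where factors such as 10, 2, and 1 for confidences 0.1, 0.5, and 1 come from.

```latex
% Contribution of one target node v_i (pseudo-label y_i, confidence P_{v_i}(y_i))
% to the structural gradient under a (negative) cross-entropy objective:
\[
A^{\mathrm{grad}}_{v_i}
  \;=\; \frac{\partial \log P_{v_i}(y_i)}{\partial A}
  \;=\; \underbrace{\frac{1}{P_{v_i}(y_i)}}_{\text{confidence-dependent weight}}
        \,\frac{\partial P_{v_i}(y_i)}{\partial A} .
\]
% Confidences of 0.1, 0.5 and 1 therefore scale the partial gradient matrices
% by 10, 2 and 1 before they are averaged into the structural gradient.
```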
The paper discusses that this phenomenon causes the attacker to focus on v_1 (because choosing the edge perturbation is based on the final structural gradient) and easily ignore the potential possibility of v_2 and v_3 being attacked.\n\nHope this will help you understand the methodology part better.\n", " I thank the authors for the follow-up reply.\n\n> We are analyzing the contribution of each node to the structural gradient in the whole structural gradient in relation to each node’s confidence level.\n\nFrom the authors' response, it's clear that the authors understand that the final structural gradient matrix is composed of all the considered target nodes by averaging. However, all of the main discoveries in this paper seem to be derived by analyzing the contribution of a specific node from a specific class. In other words, the paper didn't draw any theoretical conclusions concerning more general cases outside the focus of the contribution from a single node to support the main claims.\n\n> As we have repeatedly emphasized, many widely cited works in the field, such as [1] and [3], employ only 5% as the budget. Our baseline [4] also employs 3% as the budget for the attack, so we include this part of the experiment as well.\n\n(I'll use the authors' references.) First, as I mentioned, All of the baselines compared in the paper are from at least two years ago, including [1,3] when few defense methods were proposed at that time. As mentioned in the review and the referred surveys, many advanced defense methods have been developed during the two years. If the gain of the proposed method in this paper is traded from the threats to other defense/target models, it may weaken (otherwise strengthen) the significance of this paper.\n\nSecond, even those papers mentioned by the authors [1,3] conduct experiments at several perturbation rates and evaluate the transferability with several representative GNN methods. For example, the meta attack [1] from ICLR2020 (3 years ago) evaluates the 1%, 5%, 10%, 15%, 20% perturbation rates, and 3 representative GNN methods, which is far more than those conducted in the paper. \n\n> [1,2] include the losses I mentioned.\n\nThis response seems to be confusing. Did I mention any of them in the review?\n\nOverall, I also like the interesting discovery in the paper, but the authors could develop more in-depth theoretical and empirical understandings of the discovery, and make the presentation clearer and easier for the readers to follow. I believe the paper would make a high impact on the community if the authors provide more in-depth theoretical and empirical insights behind the discovery.\n\n\n\n", " Whether this paper is accepted or not, we will add the discussion with reviewers in the next version. Thank you for your suggestion!", " Thanks for the response, the raised clarification issues are well addressed by the authors. I also suggest the authors to include more discussion (mostly in your current response) regarding the previously raised concerns in the paper. ", " [1] Xu, Kaidi, et al. \"Topology attack and defense for graph neural networks: An optimization perspective.\" 28th International Joint Conference on Artificial Intelligence, IJCAI 2019. International Joint Conferences on Artificial Intelligence, 2019. \n[2] Geisler, Simon, et al. \"Robustness of graph neural networks at scale.\" Advances in Neural Information Processing Systems 34 (2021): 7637-7649. \n[3] Zügner, Daniel, and Stephan Günnemann. 
\"Adversarial attacks on graph neural networks via meta learning.\" arXiv preprint arXiv:1902.08412 (2019). \n[4] Lin, Xixun, et al. \"Exploratory adversarial attacks on graph neural networks.\" 2020 IEEE International Conference on Data Mining (ICDM). IEEE, 2020. \n[5] Zhu, Dingyuan, et al. \"Robust graph convolutional networks against adversarial attacks.\" Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining. 2019. \n[6] Li, Yaxin, et al. \"Deeprobust: A pytorch library for adversarial attacks and defenses.\" arXiv preprint arXiv:2005.06149 (2020). \n", " Thank you for your reply! Based on your response, we find that you have a big misunderstanding of the scenario covered in the methods section of this paper. We hope our reply will help you understand the methodology better.\n****\nQ1: To my understanding, the authors only analyze the most ideal case, where there is only one node and one class. \nA1: This paper never claimed that it discussed any ideal scenario, which is ‘a graph with one node and one class’. We try to sort out your possible misconceptions.\n1. Please note Figure 2 and Equation 6. The structural gradient A^{grad} is the gradient over the adjacent matrix generated by the loss function of all test nodes. Where the gradient generated by each node v_i is the partial gradient matrix A^{grad}_{vi}. Equation 6 shows that the average of the partial gradient matrices of the test nodes is equal to the structural gradient A^{grad}. This means that there is not just 'one node' in the scenario. We are analyzing the contribution of each node to the structural gradient in the whole structural gradient in relation to each node’s confidence level. \n2. The formula for CE loss is mentioned in Equation 9. We guess this formula makes you think we are considering only one class. First of all, we would like to remind you that the label is a one-hot distribution, which means that only the labelled class has a value of 1, while the other classes have a value of 0. Another thing that needs to be reminded of is that the purpose of the attack is to make the model's prediction of the node deviate from the original prediction (i.e., deviate from the pseudo-label class).\nThe meaning of the P_{v_i}(y_i) term in Equation 9 is the confidence level of the model's prediction on the pseudo-label class y_i. For example, if a classification task has 3 classes and the model predicts [0.1,0.2,0.7] (10% class 1; 20% class 2; 70% class 3) for a node v_k, and its pseudo-label y_k is the 3rd class, then the confidence level P_{v_i}(y_i)=0.7. We hope this makes you understand that there is more than one class in the scenario.\n3. We visualized the contribution of all nodes to the structural gradient in Figure 3. The horizontal coordinate is the confidence level of each node on the predicted class, and the vertical coordinate is the L2 norm of the node’s partial gradient matrix. This graph illustrates that nodes with lower confidence levels tend to produce more significant partial gradient matrices and thus contribute more to the structural gradient A^{grad}. Figure 3 may be able to help you understand this paper better. \n\nIf our response does not help you understand better, then you can describe more about the 'ideal scenario' you mentioned (including which part of the description made you think we are talking about the 'ideal scenario') so that we can figure out the misunderstanding.\n****\nQ2: Please give specific references to these \"losses\". 
\nA2: [1,2] include the losses I mentioned.\n****\nQ3: About some experiments you mentioned \nA3: As we have repeatedly emphasized, many widely cited works in the field, such as [1] and [3], employ only 5% as the budget. Our baseline [4] also employs 3% as the budget for the attack, so we include this part of the experiment as well. 'Budget allocation' is our way of explaining the problem and motivation, the essence of which is that we find that the contribution of nodes to the structural gradient is weighted by their confidence level. A visualization of this phenomenon can be found in Section 5.3. We believe that a higher attack budget setting would destroy the imperceptibility of the attack, but we still add some experiments to address your doubts. \nWe extend the comparison with the baseline method Meta-Self at budgets at 10% and 20% on Cora Dataset. All the perturbed graphs involved in the following experiment can be generated by the code we provided in the supplementary.\n\nWhen budget=0.1*edges:\n| Victim | GraD | Meta-Self |\n|-----------|-------|-----------|\n| GCN | 58.6% | 63.9% |\n| GraphSage | 65.1% | 66.0% |\n\nWhen budget=0.2*edges:\n| Victim | GraD | Meta-Self |\n|-----------|-------|-----------|\n| GCN | 32.8% | 41.0% |\n| GraphSage | 51.9% | 55.3% |\n\nBesides, we attack the defense model RGCN [5] (pytorch implementation from DeepRobust [6]). The results:\n\n| Budget | GraD | Meta-Self |\n|--------|-------|-----------|\n| 0.05 | 68.5% | 69.5% |\n| 0.1 | 60.7% | 63.8% |\n| 0.2 | 45.1% | 49.1% |\n\nYou seem to think that target model and victim model are different, but as far as we understand they have the same meaning. We hope that the results we provided will resolve the your confusion about experiments.\n\n\n\n", " I thank the authors for the follow-up discussion. However, if the authors don't provide further evidence to support their claims (made in the previous two responses), it's hard to convince the readers. Here I just list some of those claims in the last response:\n\n> This phenomenon leads to the result that the attacker almost ignores the potential vulnerability of nodes with other confidence levels.\n\nWhy? To my understanding, the authors only analyze the most ideal case, where there is only one node and one class. Throughout the paper, I didn't see any other theoretical conclusions/proofs.\n\n> The losses mentioned by the reviewers is based on ‘margin’...Besides, those papers consider that attackers...\n\nPlease give specific references to these \"losses\".\n\n> The discovery of this paper is a counter-commonsense phenomenon that the attacker should focus on nodes at all confidence levels equally rather than only on nodes at low confidence levels (i.e., CE losses are not good for all classification-related tasks).\n\nI agree that the discovery in this paper is interesting. However, the results seem to only be limited to an ideal scenario (theoretically) and a specific attack (empirically), since the authors didn't provide further analysis. Furthermore, \"CE losses are not good for all classification-related tasks\" seems to be an overclaim. Can the authors provide more justifications beyond the toy example analyzed in the paper? (Besides, all of the notations in the paper could be improved. In the review, I only list a few. However, the authors could check with all of the superscripts and subscripts, e.g., $y_i$ and $y_k$.)\n\nAfter reading the authors' response about the motivation, it seems they didn't answer my question. 
What is the main problem this paper aims to address? To my understanding of the authors' meaning, it's to make the budget allocation better in the attacks that adopt the CE loss. If so, since the authors didn't provide more theoretical justifications than the toy example, the readers would like to see more empirical support, which is about the experiments.\n\n> The budget in the targeted attack problem is sufficient and the number of target nodes are small, so there is no need to consider the budget allocation problem in targeted attack.\n\nIf this paper aims for a more reasonable budget allocation in the attacks, then shouldn't this paper provide more experimental results with different budgets and when the budget is insufficient to better support the claims? If the authors claim that all of the attacks using CE loss would suffer from this issue, shouldn't more empirical or theoretical supports need to be provided?\n\n> Transferability & I still insist that our experiments are consistent with the experimental setup of the compared baseline attackers, for a fair comparison, as these methods are studying the same scenario. If you read all the attack defense articles in the field, then you can find a very large number of experimental setups. Experiments exist in every article that have not been done in other articles.\n\nCould the authors provide more evidential support? All of the baselines compared in the paper are from at least two years ago when few defense methods were proposed at that time. As mentioned in the review and the referred surveys, many advanced defense methods have been developed during the two years. If the gain of the proposed method in this paper is traded from the threats to other defense/target models, it may weaken (otherwise strengthen) the significance of this paper. \n\n\n> random seed\n\nThe devil is in the details. Especially for scientific research at a top conference, the authors could be more serious and make the clarity of the paper better.", " Thank you for your reply! We have organized the responses into the following topics.\n****\n**About motivation:** \nOur motivation is based on the fact that the CE loss leads to a tendency - nodes with lower confidence on the label class contribute more on the structural gradient. This phenomenon leads to the result that the attacker almost ignores the potential vulnerability of nodes with other confidence levels. The losses mentioned by the reviewers is based on ‘margin’, which filter some nodes from the overall loss without change the form of CE (the ‘margin’ method is similar to self-paced learning). Besides, those papers consider that attackers should focus on nodes near the decision boundary, as we assumed before we start our work. The discovery of this paper is a counter-commonsense phenomenon that the attacker should focus on nodes at all confidence levels equally rather than only on nodes at low confidence levels (i.e., CE losses are not good for all classification-related tasks).\n****\n**About the scenario:** \nThe budget in the targeted attack problem is sufficient and the number of target nodes are small, so there is no need to consider the budget allocation problem in targeted attack. Our method can theoretically be applied for small or large graphs. Our experiments in PubMed (Q4&A4 in the first response) show that the performance of our method is not affected by the increasing scale of the graph. 
However, as we elaborated in Q2, the hardware is not sufficient to support our method on large graphs (because sparse matrices do not support gradient backpropagation). About '*only* consider a scenario', the studying of a specific scenario is quite common. In the surveys we have previously cited, you can actually find about half of the work focusing on a certain scenario.\n****\n**About transferability:** \nIn fact, gray-box attacks are the study of transferable attacks. This is because we use a the surrogate model (a GCN) to generate an attack strategy to attack GNNs of unknown architecture. The transferability of attacks is a broad concept, but not often talked about in the graph domain.\n****\n**About defense/target model:** \nI still insist that our experiments are consistent with the experimental setup of the compared baseline attackers, for a fair comparison, as these methods are studying the same scenario. If you read all the attack defense articles in the field, then you can find a very large number of experimental setups. Experiments exist in every article that have not been done in other articles. My understanding of the field is that the purpose of the attack model is to disrupt the generic GNN models, while the purpose of the defense model is to detect and defend against the attacks from the attack model.\n****\n**About random seed:** \nIf you reset the seeds once in a python file, then every time you call random numbers (e.g., initial parameters for GNN) after resetting you will get random numbers one after the other from a fixed random number sequence. Unless you reset the seed once before each initialization of the GNN model parameters, then you will get a different initialization. For example, we have a random number sequence R={r_1,r_2,...r_n}, and the GNN has k parameters. For the first time, the first GNN gets rundom number {r_1,...,r_k} for its initialization. Then, the second GNN gets {r_k+1,...,r_2k} for its initialization. This is also same for other GNNs. If you reset the seed before you initialize, for example the second GNN, then it will get {r_1,...,r_k}, which will be same as the first GNN's parameters. The code we share in the supplementary contains test.py. If you are interested in this coding trick, you can use this file to run GCN 10 times to see if the results differ.\n****\nOur response may be a bit long. We appreciate your patience if you read this all. Please get back to us if you have any other questions and we would love to continue the discussion. \n\nAuthors", " I thank the authors for the reply. However, it adds up the confusion about the *motivation* of this work:\n\nWhen listing the contributions, the authors target at budget allocation. While in the answer to Q1, the authors consider the transferability as the urgent issue in their study. Then\n- What is the main problem this paper aims to address?\n- If it is budget allocation, then is it really reasonable to *only* consider a scenario (untargeted attack, small scale graphs) that has relatively *sufficient* budgets, instead of the scenario (targeted attack, large scale graphs) that has relatively *insufficient* budgets?\n- If it is about transferability, how does the discovered issue relate to the transferability? 
To my knowledge, transferability has already been explored in the literature, such as TDGIA (Zou et al., KDD 2021), where they use plentiful defense/target models, while the authors only study three.\n\nRegarding the experiments, the authors aim to verify the existence of the budget allocation problem. However, without more results from different perturbation rates defense/target models, it is hard to convince the readers, specifically, whether the improvements are traded from the threats to certain defense/target models? Moreover, to my knowledge, several attack methods in other settings (please refer to my Q1 for more details) also adopt the same loss studied in the paper, but the authors haven't discussed nor analyzed.\n\nThe random seed is quite important. I am not convinced by \"Initializing victim model for 10 times in one python script under a fixed random seed will have different initial parameters. \". It's quite different from the common practice. Essentially, the values of seeds could affect several factors involved with randomness, such as the data loaders. This would add up my concerns about the reliability of the experimental results.", " [1] Zügner, Daniel, and Stephan Günnemann. \"Adversarial attacks on graph neural networks via meta learning.\" arXiv preprint arXiv:1902.08412 (2019). \n[2] Lin, Xixun, et al. \"Exploratory adversarial attacks on graph neural networks.\" 2020 IEEE International Conference on Data Mining (ICDM). IEEE, 2020. \n[3] Waniek, Marcin, et al. \"Hiding individuals and communities in a social network.\" Nature Human Behaviour 2.2 (2018): 139-147. \n[4] Liu, Zihan, et al. \"Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks.\" Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining. 2022. \n[5] Inkawhich, Nathan, et al. \"Perturbing across the feature hierarchy to improve standard and strict blackbox attack transferability.\" Advances in Neural Information Processing Systems 33 (2020): 20791-20801. \n[6] Wang, Xiaosen, and Kun He. \"Enhancing the transferability of adversarial attacks through variance tuning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. \n[7] Demontis, Ambra, et al. \"Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks.\" 28th USENIX security symposium (USENIX security 19). 2019. \n[8] Suciu, Octavian, et al. \"When does machine learning {FAIL}? generalized transferability for evasion and poisoning attacks.\" 27th USENIX Security Symposium (USENIX Security 18). 2018. \n\n", " We are grateful to you for your effort in reviewing our paper. We will next respond to the weaknesses and questions you mentioned. Before answering the questions, we would like to sort out and highlight the practical contribution of this work, as well as to elaborate on the points of focus on which we designed the experiment.\n****\n**The contribution of this work:** \nThe main contribution of this paper is to *break the stereotype that \"attack objective is the convert of classification objective\"* and propose that their objectives should be inconsistent. We theoretically and empirically analyze the existence of this problem and propose a *confidence-based* attack objective as a solution. We expect this discovery to be an inspiration for others’ researches on utilizing graph structure gradients. \n1. 
**Principal contribution**: we first present the design flaw of the widely used cross-entropy attack loss from the perspective of budget allocation. This is a counter-intuitive conclusion since cross-entropy is widely used in classification-related tasks. We give a mathematical proof that cross-entropy leads to the problem of unreasonable budget allocation. \n2. **Secondary contribution**: we propose an improved attack objective that abandons the form of cross-entropy and instead designs a loss based on the confidence of the pseudo-labeled class. \n3. **Experimental validation**: The experiments aim to prove the existence of the budget allocation problem for the CE-based attack loss and validate our proposed attack loss by comparing the attack performance. In addition, we design analytical experiments to visualize the consequences of the budget allocation problem and the optimization effect of our method.\n****\n**About the experimental design:** \nThe experimental design of this paper is driven by the purpose of a **fair comparison** with baselines [1,2,3]. The experimental section uses essentially the *same experimental design as the baselines* (including the datasets, victim model, and attack budget) to prove our point. In addition, the analysis and visualization experiments are also designed to provide support for our point.\n****\n**Q1: The scope of the studied attack might be too limited.** \n**A1**: There are articles that focus on the problem of untargeted gray-box poisoning attacks on the graph structure [1,2,4]. A gray-box attack means the attacker has access to the training samples of the victim model; a poisoning attack means the victim model is retrained after data contamination. Thus, the gray-box poisoning attack is concerned with attack transferability. This means that researchers need to mine the graph data for important as well as vulnerable input dimensions. The problem of transferability is more studied in computer vision [5-8] but less studied on graphs. That is why we consider the transferability of attacks on graphs to be an urgent issue to explore. \n****\n**Q2: About the feasibility on large scale datasets.** \n**A2**: The application of gradient-based attack methods on large scale datasets is limited by the hardware. The adjacency matrix for large datasets is huge and takes up large amounts of memory. The computation of the gradient requires the forward pass of the model to use a dense tensor as input, which results in a very large memory footprint for the gradient matrix. The memory of existing servers cannot support the implementation of the methods in this paper on large datasets.\n****\n**Q3: Confusion caused by ‘a fixed random seed’.** \n**A3**: The initial parameters of the victim models are different. We apologize for the misleading expression 'fixed random seed'. Initializing the victim model 10 times in *one python script* under a fixed random seed will yield different initial parameters. This is an implementation detail that should not be detailed in the main body.\n****\n**Q4: Performance of our method on the PubMed dataset.** \n**A4**: We test GraD on PubMed (19717 nodes, 44338 edges) with the victim model being GCN. The result of each attack method is shown as the classification accuracy on the poisoned data: Clean: 85.9%; DICE: 84.2%; EpoAtk: 83.8%; Meta_Train: 81.1%; Meta_Self: 78.4%; GraD(ours): 70.4%. 
We will put these results in the Appendix after the complete experiment.\n****\n**Q5: About undefined symbols and typos.** \n**A5**: We have revised these issues in the revised version. Changes are remarked in red.\n\n\n\n\n", " We are grateful to you for your effort in reviewing our paper. We will next respond to the weaknesses and questions you mentioned.\n****\n**Q1: Is there already a baseline corresponding to it in the table 1&2?** \n**A1**: The Meta-Self [1] is the baseline you mentioned, and comparing it with our method can be considered an ablation study. Both Table 1 and 2 contain a comparison of our method with this baseline.\n****\n**Q2: Can you show some white-box attack results of GraD and white-box baseline methods?** \n**A2**: We conduct an poisoning attack comparing with the white-box method CE-minmax [2] (implementation from DeepRobust [3]). Our proposed GraD is implemented in the CE-minmax’s backbone and test scenario. The attack budget is set to 5%. On the Cora dataset (Acc=82.3%): GraD: 74.4%, CE-minmax: 75.8%; on the Cora-ML dataset (Acc=83.1%): GraD: 74.7%, CE-minmax: 76.5%. The results demonstrate the applicability of our method to white-box attack methods. \n****\n**Q3: I hope that the explanation of the purpose of the poisoning attack is simply added to the introduction or related works.** \n**A3**: The poisoning attack studies the impact of attacked data on the model training. It is also a scenario encountered in an attack where the attacker has contaminated the data before the model is trained. We have added this sentence to the Introduction, marked in red (line 30-32).\n****\n**Q4: I want to know more scenarios to apply GraD other than gray-box / white-box attack on GCN-based node classification models if exist.** \n**A4**: This paper discusses the generation of the gradient on the graph structure from the node level. Thus, it is a global budget allocation for the nodes (i.e., samples). Our method has the potential to be applied to attacks on various data types that require budget allocation, such as model skewing with data poisoning. In addition, the method can be used in attribution analysis studies of node-level classification models.\n****\n**Q5: Why are there no line numbers in your submission?** \n**A5**: Sorry for missing the line numbers. It has been fixed in the revised version.\n****\n**Q6: I wish that the authors include ethics or broader-impact statement in the paper or appendix.** \n**A6**: Our proposed attacker is at risk of being used maliciously. A graph data holder should protect his graph structure, node attributes, and labels of training nodes. It has been added in Appendix as Broader Impact (line 476-481). \n****\n[1] Zügner, Daniel, and Stephan Günnemann. \"Adversarial attacks on graph neural networks via meta learning.\" arXiv preprint arXiv:1902.08412 (2019). \n[2] Xu, Kaidi, et al. \"Topology attack and defense for graph neural networks: An optimization perspective.\" 28th International Joint Conference on Artificial Intelligence, IJCAI 2019. International Joint Conferences on Artificial Intelligence, 2019. \n[3] Li, Yaxin, et al. \"Deeprobust: A pytorch library for adversarial attacks and defenses.\" arXiv preprint arXiv:2005.06149 (2020).\n", " We thank the reviewers for their comments and suggestions. We have revised the paper based on some of the suggestions and marked them in red. The revisions include: \n1. Add a brief intruduction for the poisoning attack. (**Line 30-32**)\n2. Correct symbols and typos. 
(**Line 115,118,131,137,169-170,312-313**)\n3. Correct a statement that could cause ambiguity. (**Line 249**)\n4. Add section Broader Impact. (**Line 476-481**) \n\nWe answer to each reviewer's questions individually in our response.\n****\nDiscussions with reviewers and some relevant literature have been added to the latest version .", " We are grateful to you for your effort in reviewing our paper. We will next respond to the weaknesses and questions you mentioned.\n****\n**Q1: About papers [1,2] from Ma et al..** \n**A1**: Ma et al. [1,2] focused on black-box attacks on node attributes. These works start from the importance of nodes in the graph and introduce prior knowledge to assist the attacker in selecting the nodes to be attacked. The scenarios studied in [1,2] differ from the attack scenarios discussed in this paper (i.e., gray-box poisoning attack vis edge perturbation). We add paper [2] as a reference in the related work.\n****\n**Q2: About how prevalent cross-entropy-based attack loss is in the field.** \n**A2**: Node classification is a common task in graph datasets. In existing white- and gray-box attacks for classification models, the form of cross-entropy is an essential component of the attack loss [3-6]. Our proposed confidence-oriented attack loss overturns this mindset, which has been justified in this paper.\n****\n**Q3: Existence of other edge perturbation methods.** \n**A3**: In addition to edge perturbation with gradient, methods exist in the field to solve edge perturbation problems with reinforcement learning (RL), such as RL-S2V [7] and ReWatt [8] (focused on graph-level classification). Perturbation strategy optimization based on RL is an interesting research topic; however, it is not the most efficient in this paper's task because reinforcement learning is computationally expensive.\n****\n[1] Ma, Jiaqi, Shuangrui Ding, and Qiaozhu Mei. \"Towards more practical adversarial attacks on graph neural networks.\" Advances in neural information processing systems 33 (2020): 4756-4766. \n[2] Ma, Jiaqi, Junwei Deng, and Qiaozhu Mei. \"Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem.\" Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining. 2022. \n[3] Xu, Kaidi, et al. \"Topology attack and defense for graph neural networks: An optimization perspective.\" 28th International Joint Conference on Artificial Intelligence, IJCAI 2019. International Joint Conferences on Artificial Intelligence, 2019. \n[4] Wu, Huijun, et al. \"Adversarial examples for graph data: deep insights into attack and defense.\" Proceedings of the 28th International Joint Conference on Artificial Intelligence. 2019. \n[5] Zügner, Daniel, and Stephan Günnemann. \"Adversarial attacks on graph neural networks via meta learning.\" arXiv preprint arXiv:1902.08412 (2019). \n[6] Geisler, Simon, et al. \"Robustness of graph neural networks at scale.\" Advances in Neural Information Processing Systems 34 (2021): 7637-7649. \n[7] Dai, Hanjun, et al. \"Adversarial attack on graph structured data.\" International conference on machine learning. PMLR, 2018. \n[8] Ma, Yao, et al. \"Attacking graph convolutional networks via rewiring.\" arXiv preprint arXiv:1906.03750 (2019).\n\n", " This paper studies the problem conducting untargeted poisoning attacks against graph neural networks by pertaining edges of the original graph. 
The authors identified the problem of attack loss design, which mostly uses the same loss function (i.e., negative cross entropy) for model training, and pointed out that, with the wrong loss function, attack budget is wasted on causing misclassification for nodes that are already misclassified. Then a debasing solution is proposed by multiplying the gradients with their confidence scores and the obtained simple attack strategy outperforms the existing baselines significantly. I like the idea of the paper as it challenges some of the common beliefs in the attack design against graph neural networks, which is novel and signficant. The presentation of the paper is also very clear and easy to follow. Overall, the strength of the paper significantly outweighs the weakness. However, I have the following concerns:\n1) the work by Ma et al., [9] (reference in the paper) and one missing work linked below [1] might deserve more discussion, as they are more related to the key problem identified in the paper: the downside of choosing a wrong type of attack loss.\n[1] Ma et al., \"Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem\", WSDM 2022. \n2) the identified problem and fixable solution given in the paper is only demonstrated for the case of cross-entropy loss, and so the conclusion is not very generic.\n\n I do not list this as a question that will change my decision, but I am curious to know, if there are any other ways (e.g., designing new attack loss) to perform the edge perturbation instead of renormalizing the gradient. N/A", " - The authors tackle the problem of the attack objective (the form of cross-entropy function) in the untargeted attacks on node-level classification models. \n- They show that nodes with low confidence significantly affect the gradient.\n- To alleviate this problem of inefficient attack budgets, they propose a novel attack objective whose corresponding gradient is not affected by the confidence of nodes. \n- They conduct experiments on gray-box poisoning attack and shows that the proposed attack method GraD (based on the proposed gradient-debias attack objective) can significantly improve the attack performance. # Strengths\n- The authors states an interesting problem in the cross-entropy based attack objective. (Figure 1 was very helpful to understand the concept!)\n- The authors explain the proposed problem using a simple mathematics. Also, they mathematically explain their proposed attack objective can solve the problem.\n- They conduct fair comparison on various datasets and achieve the state-of-the-art attack performance.\n\n# Weaknesses\n- The usage of the method seems limited to gray-box / white-box attack on GCN-based node classification models. \n- Ablation studies seem insufficient. - I hope that the explanation of the purpose of the poisoning attack is simply added to the introduction or related works.\n- Why are there no line numbers in your submission?\n- I want to know more scenarios to apply GraD other than gray-box / white-box attack on GCN-based node classification models if exist. \n- If I understood correctly, we can apply GraD to white-box attack too. Can you show some white-box attack results of GraD and white-box baseline methods? \n- One can replace the attack objective of GraD to the original negative cross-entropy loss and compare it with GraD as an ablation study. Is there already a baseline corresponding to it in the table 1, 2? 
If not, I want to see this result.\n\nPOST REBUTTAL COMMENTS: The authors answered the questions. My concerns have been addressed. They can apply their method also in the white-box attack setting. They added broader impacts and fixed some minor issues in the paper. I adjusted my score from 6 to 7. This paper handles adversarial attacks, which can be used by malicious adversaries. That said, I wish that the authors include an ethics or broader-impact statement in the paper or appendix.", " This paper studies the graph modification attack in an untargeted poisoning setting by editing the edges, and reveals that using the negative cross entropy as the attack loss causes an unreasonable budget allocation issue. Thus, the paper proposes a new attack objective that plugs in the confidence to reduce the influence of the negative cross entropy. The authors conduct some experiments to validate their findings. *Originality & Significance*: Studies around graph adversarial attack and defense are of great importance to the community. However, given the limited scope of the focus in this paper, the significance might be weakened. See the Questions below for more details.\n\n*Quality*: The authors conduct certain theoretical and empirical analysis to validate their claims. However, the experiments might not be sufficient to fully support the effectiveness of the proposed method. See the Questions below for more details.\n\n*Clarity*: The paper is well-organized yet not easy to follow. There are too many undefined mathematical symbols and typos.\n Starting from an interesting observation of the budget allocation in the untargeted poisoning graph modification attack, the authors reveal that using the negative cross entropy loss can affect the gradient signals passed to the adversary, hence the adversary keeps allocating budgets to attack the low-confidence nodes. However, several concerns were raised when I read the paper.\n\n1. The scope of the studied attack might be too limited. In fact, graph adversarial attacks can be poisoning or evasion, targeted or untargeted, and modification or injection attacks [1,2,3]. Especially, [3,4] point out that the poisoning attack can have severe issues due to the ill-defined imperceptibility. More justification and discussion are required for why the setting in the paper is adopted. Otherwise, the limited setting used and discussed in this paper weakens the significance and novelty of the paper, given that the main contribution of this work is to propose a new adversarial objective for graph adversarial attacks. \n\n2. Important experiments seem to be missing from the paper. Since the proposed objective is meant to improve the budget allocation of the graph adversarial attack, the authors only conduct two experiments with 2 specific allowed budgets, which is significantly fewer than in the existing literature [1,2,3,4,5,6,7], providing weak support for the effectiveness of the proposed objective.\n\n3. It would be more interesting if the authors could discuss and evaluate the budget allocation during targeted attacks on large-scale graphs [8,9], which tend to be appealing in practice, as we often do not have sufficient budgets to attack all of the test nodes in a large-scale graph, while many node classification applications focus on large-scale graphs. In the literature, the graph injection attack can serve as a good proxy [4].\n\n4. The scope of the experiments is also limited. 
The authors only test with GCN, GAT, GraphSage, while neglecting many defense methods [6,7], as well as methods such as Nettack that can also be leveraged to attack and see the survey [1,2,3] for more details. Some representative datasets such as Pubmed are missed.\n\n5. In experimental setup, ``Each experiment is repeated ten times with a fixed random seed’’, what is the point to fixing the random seeds and repeat the experiments? How do they produce different results if the random seeds are fixed?\n\n6. The writing is hard to follow. There are too many undefined mathematical symbols and typos. Here are some of them (not sure why authors cancel the line numbers, making the reviewers hard to point out the specific locations of their concerns):\n - In 3.1, ``while the prediction of f θ is denoted by the probability distribution P_vi’’\n - In Eq. 4, what is z_i?\n - After Eq. 4, `` where V ∗ is a subnet of nodes’’\n - In Sec. 4, is it the linearized GNN, SGC or GCN?\n - In Sec. 4, y_i is undefined.\n - In Eq. 6, what is A^grad? What is A^grad with subscript v_i? Does it mean taking some entry in A^grad?\n - After Fig. 4, ``We consider this to be since the node deviates from the centroid of any class.’’\n\n\n\n\nReferences:\n\n[1] Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu Aggarwal, Jiliang Tang. Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies. SIGKDD Explorations 2020.\n\n[2] Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Philip S. Yu. Adversarial Attack and Defense on Graph Data: A Survey. arXiv 2020.\n\n[3] Qinkai Zheng, Xu Zou, Yuxiao Dong, Yukuo Cen, Da Yin, Jiarong Xu, Yang Yang, Jie Tang. Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning. NeurIPS 2021 Datasets and Benchmark Track.\n\n[4] Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng. Understanding and Improving Graph Injection Attack by Promoting Unnoticeability. ICLR 2022.\n\n[5] Shuchang Tao, Qi Cao, Huawei Shen, Junjie Huang, Yunfan Wu, Xueqi Cheng. Single Node Injection Attack against Graph Neural Networks. CIKM 2021.\n\n[6] Xiang Zhang, Marinka Zitnik. GNNGuard: Defending Graph Neural Networks against Adversarial Attacks. NeurIPS 2020.\n\n[7] Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, Jiliang Tang. Graph Structure Learning for Robust Graph Neural Networks. KDD 2020.\n\n[8] Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, Stephan Günnemann. Robustness of Graph Neural Networks at Scale. NeurIPS 2021.\n\n[9] Enyan Dai, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, Suhang Wang. A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. arXiv 2022.\n The authors did not provide such a discussion." ]
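As a quick sanity check on the budget-allocation argument threaded through the rebuttal above: the 1/confidence weighting in the authors' three-node example (confidences 0.1, 0.5, 1 weighted by 10, 2, 1) follows from the chain rule, since d(-log p)/dA = -(1/p) · dp/dA. The sketch below only verifies that arithmetic with autograd; it is not code from the reviewed paper, and the linear confidence objective in the last lines is an assumed stand-in for the paper's debiased loss, whose exact form is not reproduced in this discussion.

```python
import torch

# Confidence P_{v_i}(y_i) of three target nodes on their pseudo-label class.
conf = torch.tensor([0.1, 0.5, 1.0], requires_grad=True)

# CE-style attack objective: each node's partial gradient is scaled by 1/confidence.
ce_loss = -torch.log(conf).sum()
ce_loss.backward()
print(conf.grad.abs())   # tensor([10., 2., 1.]) -> the low-confidence node dominates

# Assumed stand-in for a confidence-based (debiased) objective: equal per-node weight.
conf.grad = None
(-conf.sum()).backward()
print(conf.grad.abs())   # tensor([1., 1., 1.])
```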
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "R2SXzCfwge", "ynukfXIPmMX", "FtQrDa1Jl-2", "FtQrDa1Jl-2", "GuFwtWroIOo", "FtQrDa1Jl-2", "Si7LQjz4bLG", "1tfPnJMXTPv", "9Nlpqva0GY8", "cCRIjMCTWHc", "L505chbc8oE", "UYDqGwdLDj7", "G2OLDLlcjX", "Wsle5sxb8Tx", "UYDqGwdLDj7", "RS3lM2EfeqP", "ZxUXaK313sV", "NIHICjMgGgM", "gz_ITAEdvni", "gz_ITAEdvni", "Imfyr8NILL8", "bDH2W8aZ_W", "nips_2022_vkGk2HI8oOP", "b7_BTEvZh7M", "nips_2022_vkGk2HI8oOP", "nips_2022_vkGk2HI8oOP", "nips_2022_vkGk2HI8oOP" ]
nips_2022_319xcX5qIcO
Signal Recovery with Non-Expansive Generative Network Priors
We study compressive sensing with a deep generative network prior. Initial theoretical guarantees for efficient recovery from compressed linear measurements have been developed for signals in the range of a ReLU network with Gaussian weights and logarithmic expansivity: that is when each layer is larger than the previous one by a logarithmic factor. It was later shown that constant expansivity is sufficient for recovery. It has remained open whether the expansivity can be relaxed, allowing for networks with contractive layers (as often the case of real generators). In this work we answer this question, proving that a signal in the range of a Gaussian generative network can be recovered from few linear measurements provided that the width of the layers is proportional to the input layer size (up to log factors). This condition allows the generative network to have contractive layers. Our result is based on showing that Gaussian matrices satisfy a matrix concentration inequality which we term Range Restricted Weight Distribution Condition (R2WDC) and which weakens the Weight Distribution Condition (WDC) upon which previous theoretical guarantees were based. The WDC has also been used to analyze other signal recovery problems with generative network priors. By replacing the WDC with the R2WDC, we are able to extend previous results for signal recovery with expansive generative network priors to non-expansive ones. We discuss these extensions for phase retrieval, denoising, and spiked matrix recovery.
Accept
This paper focuses on theoretically studying signal reconstruction with non-expansive generative networks. In short, the authors show that with a random Gaussian generator, any signal in its range can be reconstructed from Gaussian measurements. This holds as long as the number of measurements and the width of all layers are proportional to the size of the input layer. Compared to prior work, this paper removes the requirement of expansion of the layers. Most reviewers thought the paper was interesting and thought the improved theoretical analysis was nice. The reviewers also raised a variety of technical concerns, most of which were addressed during the rebuttal. I concur with the reviewers and think it is a nice contribution despite some flaws, and am recommending acceptance. I urge the authors to follow the detailed comments of the reviewers to improve their manuscript for the camera-ready version of the paper.
train
[ "kCcSeKyFSVO", "8FQWEkpx6k7", "Toh427gvA9s", "lm3j_JE8Azow", "dr0IarwpLZi", "3lE50hOp7FU", "IDS8RczrKjO", "i_kGqb0ES5U", "vrkbdJl9W1M" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nbelow are the answer to your questions\n\n- Previous theoretical results proving $m = \\widetilde{\\Omega}(k)$ lower-bounds were given in [Ra] and [Rb]. Notice that studying (sharp) information-theoretic limits is beyond the scope of this paper, which instead is devoted to analyzing the performance of practical gradient descent methods for solving signal recovery problems with generative network priors.\n\n- $Q_{r,s}$ is the expected value of $W_{+,r}^T W_{+,s}$ under Gaussian weights distribution. It allows to control how a ReLU layer distorts angles. In particular, $r^T Q_{r,s} s$ is the expected value of $ReLU(W r)^T ReLU(W s)$. We have added a remark on this in Section 2 at lines 142 - 143.\n\nRegarding the dependencies on the network widths and depth in the proved bounds. One of the objectives of this paper was to prove sample complexities linear in the latent dimension k of the signal and polynomial in the depth d. We agree that the dependencies on the network widths and depth are undesirable, and we leave getting sharper bounds for future works. \n${}$\n\nRegarding extracting relevant guidelines from our theory. The main motivation of this paper was to understand the empirical observation that gradient-descent methods can be successfully used to solve compressive sensing (and other signal recovery problems) with generative network priors, _despite_ the non-convexity of the loss functions minimized. Notice that even if in practice one needs a few random restarts of gradient descent to obtain good results, it is still surprising that such highly non-convex functions can be minimized (potentially it is NP-Hard). So how does this informs the use of generative networks? Our theory demonstrates that the loss landscape of minimization problems such as (1), while non-convex can still be well-behaved as long as the weights have well-behaved distributions and the contractive layers and number of measurements are not too small compared to the latent dimension of the generative network. Inspired by our results, one could study methods for regularizing the distribution of the weights of a generative network, to make easier the minimization of (1). \n\n------ \n\n### **Other remarks**:\n\n#### _Numerical Experiments_\n\nBased on some of your previous comments we have added a section in the appendix (Appendix H) with synthetic experiments similar to those in [20]. These experiments validate our theoretical findings and show that they hold in a wider parameters range (in particular with a milder dependency on the depth d).\n\nIn Appendix H we also discuss how to compute the subgradients of the loss function, and give a practical algorithm that instead uses the \"derivatives\" as computed by commonly used deep learning libraries. \n\n#### _Denoising effect_\n\nAs initially suggested we have expanded on the denoising effect of Algorithm 1 (line 92 - 95). We demonstrate it in the synthetic experiments (Appendix H). We moreover notice that this denoising effect has already been empirically observed in previous works for trained generative networks (e.g. [3]). Hence, our theoretical results provide theoretical insights into this phenomenon. \n\n------ \nReferences:\n\n [Ra] A. Kamath, E. Price, and S. Karmalkar, “On the power of compressed\nsensing with generative models”, in International Conference on Machine Learning, 2020.\n\n [Rb] Z. Liu and J. 
Scarlett, “Information-theoretic lower bounds for compressive sensing with generative models”, in IEEE Journal on Selected Areas in Information Theory, vol. 1, no. 1, pp. 292–303, 2020.", " I would like to thank the authors for their comments ans answers!. \n\n* Regarding the argument around information-theoretic optimality: the provided argument works for linear models (with oracle known subspace and solving linear systems etc.). I am not sure if this holds for nonlinear networks and the limit can be worse then (this can be good for the paper: say hypothetically, the IT bound would depend on $k$ and be logarithmic in $n_i$- in this case, the obtained bound of the paper could be actually claimed to have IT optimal dependencies there). I think the argument for information theoretic limits should way mathematically more rigorous. As far as I can see, I cannot find a mathematical proof of information-theoretic limits. Of course, I would be more than happy if the author could point me to this argument. \n* I still prefer that the authors to discuss the intuition behind $Q_{r,s}$ in the main paper. Let me ask a question more directly: what is the intuition behind $Q_{r,s}$? Can you please comment here? Note that this can probably help coming up with new practices say adding a regularization term during training. \n\nOf course the paper relaxes some of the existing issues of the current theory (expansiveness), it still has many limitations and undesirable dependencies as mentioned by the other reviewers. This is of course natural for theoretical developments, and I would still favor the acceptance, if the author can clarify further what the current theory, despite its limitation, can tell us about the practice of generative priors. ", " Thank you for carefully reading our paper and the many interesting questions. Below our comments. \n\n- Regarding citation [21]. Thank you for pointing this out. We have moved this citation to the main body of the paper (line 67-68).\n\n- Regarding the complexity of certifying the R2WDC. Great question! Indeed establishing that the R2WDC is satisfied could also be in general hard. We leave this problem for future work and hope that, as for the RIP, even if it is demonstrated that certifying the R2WDC is hard, it will prove to be useful to better understand the performance of signal recovery methods based on generative networks. \n\n- Regarding the theoretical assumptions on the weight matrices. The fundamental assumptions on the weights of G used in the analysis are the symmetry of the Gaussian distribution and its strong concentration properties. These properties lead to non-convex but well behaved minimization problems like (1) whose geometry suggests checking the condition $f(x_t) < f(-x_t)$ in Algorithm 1.\n\n\n- Regarding rate-optimality. To clarify, in [20] the authors claim that the $m = \\widetilde{\\Omega}(k)$ is rate-optimal or information-theoretically optimal with respect to $k$ (up to log factors in $n$ and polynomials in $d$). This is indeed optimal, as the following reasoning roughly shows. Consider an oracle that would give the subspace in the range of $G$ in which the target signal $y_\\star$ lies. With this information, the problem becomes a simple linear problem over a fixed subspace and would require a number of measurements exactly $k$. If one had then less than $k$ number of measurements, then recovering $y_\\star$ exactly would be impossible as fixing all the degrees of freedom would be impossible. \n\n- Regarding Radamacher's theorem. 
Thank you for the suggestion, we have rephrased the discussion around the Clarke subdifferential (see line 149-150). \n\n- Regarding the R2WDC with the $\\mathcal{N}(0, 2/n_i)$ entries. Yes, strictly speaking with this scaling the definition of the R2WDC would need a “rescaling”, notice indeed that with this scaling the expected value of $W_{+,s}^T W_{+,r}$ is $2 Q_{r,s}$. \n\n- Regarding the polynomial scaling with respect to the depth $d$ of the network. As in the previous literature, these have not been optimized and likely to be sub-optimal. As we mention at the end of the paper, we leave establishing sharper bounds for future works. \n\n- Thank you for noticing those two typos in the appendices.\n\n- Regarding the interpretation of $Q_{r,s}$, the practical implementation of Algorithm 1 we refer the reader to the original paper where these were defined. Similarly, since the sketch of the proof of convergence of Algorithm 1 was already given in [20], we prefer to focus on the novel contributions and ideas of this paper (e.g. discussion in Section 4.1).\n\n\n", " We really appreciate your positive feedback and the careful reading of our paper. Below are our answers to the questions and comments raised. \n\n\n\"On **Weakness**\" \n\n- Regarding the dependence from $2^d$ in the number of iterations $T$ and the size of the noise $\\| \\| \\eta \\| \\|$, notice that because of the ReLU layers the norm of the output $G(x_\\star)$ of the network scale like $\\| \\|x_\\star\\| \\|/2^{d/2}$, and similarly the loss function $f_{CS}(x_0)$ as $\\| \\|x_0\\| \\|^2/2^d$ (see Proposition C.1). Therefore up to change of constants, these bounds for can be written as $T\\leq f(x_0)/(d^4 y_\\star \\epsilon)$ and as $ \\| \\|\\eta \\| \\|\\leq \\| \\|y_\\star\\| \\|/d^{42}$. We have modified Remark 1 clarifying this point. \n\n${}$\n\n- Regarding the R2WDC scaling. Notice that with the scaling of the paper one has $\\| \\| \\text{ReLU}(W x)\\| \\approx \\|\\| x \\| \\|/\\sqrt{2}$. With the $\\mathcal{N}(0, 2/m)$ scaling one would obtain $\\| \\| \\text{ReLU}(W x)\\| \\approx \\|\\| x \\| \\| $. The R2WDC then would still hold (modulo multiplying the operator $Q_{r,s}$ by a factor of 2).\n\n \"On **Questions**\" \n\n- The term $\\sqrt{k/m} \\, O(\\| \\| \\eta \\| \\|) $ (as opposed to $O(\\| \\| \\eta \\| \\|) $) appearing in the reconstruction bound follows from a more careful analysis of the perturbation of the gradient due to $ \\eta $. The perturbation is given by $\\Lambda_x^T A^T \\eta$. In [20] it is shown that $\\Lambda_x^T A^T$ is $O(1)$. Here we notice that $A^T \\eta$ is gaussian of dimension $m$ and $\\Lambda_x^T$ \"projects it\" on subspaces of dimension $k$. This leads to a \"denoising effect\" of the order $\\sqrt{k/m}$. \n\n${}$\n\n- We agree that understanding the optimal poly dependence on the depth $d$ is an interesting and under-developed area of research. However, notice that both [R20] for compressed sensing and [R17] for phase retrieval establish a sample complexity of the order $m \\geq C_\\epsilon d k \\log(n_1..n_d)$. So both have a linear explicit dependence on $d$. On the other hand, the factors $C_\\epsilon$ in the two papers differ and have implicit dependence on $d$ through $\\epsilon$, so really the sample complexity can be thought of as $m \\geq poly(d) k \\log(n_1..n_d)$. We are not aware of any papers establishing the sharper linear dependence on $d$ for compressed sensing and phase retrieval. 
\nRegarding this paper, we use a more refined counting of the number of subspaces containing the range of $G$ and show that the sample complexities are of the order $m \\geq poly(d) k \\log(n_1/k..n_d/k)$. \n\n${}$\n\n- Thank you for the suggestion! We have expanded the paragraph after the definition of the R2WDC (lines 195-195) commenting on this point.\n\n\n", " We thank Reviewer xC1X for the positive feedback on the theoretical results of this paper. \n\nRegarding our contribution and motivations. In summary, previous theoretical works for efficient recovery with (random) generative network priors were only given for expansive networks [R20] (while generative networks used in practice have contractive layers) or for non-gradient descent methods [R22] (many signal recovery methods based on generative networks are based on gradient descent methods). The motivations and contributions of this paper are to fill the gap between empirical results and the (though stylized) theory.\n\nBelow are more details on our contributions and motivations.\n\n\n- We notice that modern state-of-the-art generative networks have layers near the outputs that are often larger than the output itself. These networks are therefore non-expansive. For example, in the StyleGAN2 architecture trained on 3 × 256 × 256 images, the output of the second to last layer has dimensions 64 × 256 × 256. We have added a comment on the expansivity of commonly used generative networks on lines 57-59.\n\n${}$\n\n- Regarding the expansivity assumption, we notice that [1] requires strict logarithmic expansivity, while [2] requires strict constant expansivity. In this paper, we show that *no strict expansivity* is required. \n\n${}$\n\n- Finally, while [R22/3] was also able to remove the assumption on the expansivity of the networks this was at the expense of exponential dependence on the depth and the use of a non-standard iterative method. Algorithm 1 of this paper instead is a gradient descent method, closer in spirit to the ones often used in practice and does not suffer from exponential dependence on $d$. \nWe have added a remark on these contributions on lines 77-81 \n\n${}$\n${}$\n\n[1] Hand, P., & Voroninski, V. (2018, July). Global guarantees for enforcing deep generative priors by empirical risk. In Conference On Learning Theory (pp. 970-978). PMLR. \n\n[2] Daskalakis, C., Rohatgi, D., & Zampetakis, E. (2020). Constant-expansion suffices for compressed sensing with generative priors. Advances in Neural Information Processing Systems, 33, 13917-13926. \n\n[3] Joshi, B., Li, X., Plan, Y., & Yilmaz, O. (2021, October). PLUGIn-CS: A simple algorithm for compressive sensing with generative prior. In NeurIPS 2021 Workshop on Deep Learning and Inverse Problems.\n\n\n\n\n", " The paper extends the previous theoretical studies around convergence of subgradient descent (Algorithm 1 from [20]) for ReLU generative priors to cases where the generative model is not necessarily expansive. The result relies on the weights being drawn from i.i.d. Gaussian matrices and assumes network widths and measurement numbers are proportional to input dimension (See Theorem 1.1, conditions 1 and 2). The results extend to other inverse problems like phase retrieval, denoising and spiked matrix recovery. \nTheorem 5.4 provides the result for random Gaussian matrices and weights, and Theorem 4.4 provides a more general result for any matrices satisfying RRIC and R2WC. Lemma 5.1 is well known in the literature. 
Lemma 5.2 and 5.3 are proven in the appendix and together with Theorem 4.4 imply Theorem 5.4.\nThe extensions to phase retrieval, denoising and spiked matrix recovery are given in the supplementary materials, although the proofs are not explicitly given. \n **Strength**\n\n* The paper improves the analysis of [22] (also [21]) and gets much better dependence on the number of layers $d$ and removes the dependence of width of layer $i$ on the layer index $i$. \n* R2WDC is weaker that WDC condition, and nonetheless provides better results. \n* The proof builds on many previous works, for example [18], [20], [22], and, based on my rapid read, is sound and well presented.\n* The contribution of the paper, namely relaxing the constraints on the network further and introducing R2WDC, is a good step forward in this analysis.\n\n**Weakness**\n\n* Some important limitations of the paper are not mentioned clearly by the authors, and some of the statements are not fully precise (see some comments below – for example, it seems to me that the sample complexity has drastic dependency on $d$; also on information theoretic optimality.).\n* The authors do not extract relevant guidelines from their theory. Practical generative models are trained from data. The paper considers generative models with random Gaussian weight, which is fine to derive the theoretical analysis. Even if the results are not directly applicable to practical networks, it is important to extract theoretical insights from the developed theory (like the authors’ comment on denoising effect of Algorithm 1 and generative priors). This is missing from the paper. See also my comments on gradient descent issues for trained generative priors.\n* Having numerical results, for instance similar to those in [20], can always help communicating the paper’s contribution better.\n* The paper could have been organized better by removing some discussions to the supplementary materials and presenting first the core idea behind the proof (similar to [20]).\n \n* [21] seems to be the extended NeurIPS version of workshop paper [22]. It is probably better to use this version instead of [22]. \n* I suggestion expanding on the denoising effect of the algorithm 1 as mentioned in page 3, line 89. This is an interesting consequence. \n* A relevant question, from practical perspective, is to see if one can verify R2WDC for a pre-trained network (not a random one). It is known that verifying RIP property of a matrix is NP-hard. It would be interesting if the authors can comment on it. \n* Gradient descent-based methods used in context of trained generative priors suffer from lack of convergence and usually require occasional restarts. This is in contrast with the current claims of optimality in the paper. This merits some comments from the paper. Which assumption in the theoretical framework is likely to be violated for this to happen? Gaussian assumption is a good candidate, but it is just an instance of distributions satisfying R2WDC. Is it related to details of Algorithm 1, for instance the check $f(-x_t)<f(x_t)$? Some comments on this can be helpful for the paper.\n* In page 2, the authors mention that the method of [20] is information theoretically optimal given $m=\\tilde{\\Omega}(k)$. As far as I can see [20] does not claim information theoretic optimality. Why is this the case? Is there any work providing lower bound on the sample complexity for random generative models (for example like Gelfand width analysis in compressed sensing)? 
A similar claim is made in the final part of the paper. \n* Please add a few sentences to the paper on how the subgradient can be computed in practice. Of course, for smooth generative models, this is not an issue, but for ReLU networks, the question is whether the gradient descent as implemented in backprop is a good proxy.\n* I suggest rephrasing the mention of Rademacher’s theorem in line 149, page 4. The implication of Rademacher’s theorem, roughly, is that Clarke’s subdifferential is well behaved since the $\\text{dom}(\\nabla f)$ has full measure. With current phrasing, this point is not clear.\n* I suggest adding the intuition behind the matrix $Q_{r,s}$ to the main paper (for example by mentioning the connection with measuring angle distortion by $x\\to W_{+,x}$).\n* The $\\epsilon$ in Theorem 4.4 is $O(1/d^{90})$. This term would dominate the rest of terms in the sample complexity condition (11) (see Assumption A.3). Basically, $m\\geq \\hat{C}_\\epsilon$ implies at least $m\\geq d^{90}$. This is a poor dependency on $d$, although it is present in the related works too. \n* In Remark 1 and 2, it is suggested that the factors $2^d$ can be removed with the entries of $W_i$ drawn from $\\mathcal{N}(0,2/n_i)$. Would not this violate the requirements for R2WDC condition?\n* Apart from $\\hat{C}_\\epsilon$, the condition A.3 of Assumptions A has implicit linear dependence on $d$. Consider a non-expansive network with $n_i=n$ for all $i$. Then the sample complexity lower bound includes $dm$. Similar arguments can be made of assumption B.2 and for $n_i$, namely $n_i\\geq k.i \\log (ne/k)$. The authors should comment on this.\n* Typo: in two places in the paper, RRWDC is used instead of R2WDC. \n Although the paper is theoretically interesting and a step forward, I feel that the authors can do a better job in communicating the idea, extracting useful guidelines and presenting the limitations.", " This paper relaxes an assumption in previous theoretical work on signal recovery with generative networks. Namely, previous work required that the generative network have constant-width layers (and before that, logarithmically expanding layers). This work shows that the generative model may preserve the signal recovery guarantees, while having layers of contracting width. Strengths:\n- Solid theoretical result on a relaxed assumption for generative model-based signal recovery.\n\nWeaknesses:\n- Motivation is lacking. The paper asserts that relaxing the assumption in question is important, as contractive layers are \"often the case in real generators\". However, this is stated without citation, and I am not sure it is true. Most GANs and VAEs involve constant or growing width of layers in the generative network. The only provided support for relaxing this assumption is a citation of [1]. However, I could not find this statement anywhere in the cited work. The authors did state that relaxing the logarithmically expansive width assumption was important, but that has apparently already been shown by [2, 3].\n\nOverall, it is unclear to me how important the theoretical improvement is, as this is not my main area of expertise, and the paper does not properly motivate the contribution. However, I believe that this work provides a straightforward contribution to theory in this field.\n\n[1] Hand, P., & Voroninski, V. (2018, July). Global guarantees for enforcing deep generative priors by empirical risk. In Conference On Learning Theory (pp. 970-978). 
PMLR.\n[2] Daskalakis, C., Rohatgi, D., & Zampetakis, E. (2020). Constant-expansion suffices for compressed sensing with generative priors. Advances in Neural Information Processing Systems, 33, 13917-13926.\n[3] Joshi, B., Li, X., Plan, Y., & Yilmaz, O. (2021, October). PLUGIn-CS: A simple algorithm for compressive sensing with generative prior. In NeurIPS 2021 Workshop on Deep Learning and Inverse Problems. Please answer the motivation question above. No.", " This paper presents a theoretical analysis for signal recovery with non-expansive generative networks. The main results suggest that given a random Gaussian generator, any signal in its range can be reconstructed from Gaussian measurements as long as the number of measurements and the width of all layers are proportional to the size of input layer. This result improves upon the earlier analyses that require width of layers to expand. Strengths. \n- The paper presents a new analysis for the general signal recovery using generative priors. \n- The main contribution is in relaxing the conditions on the width of the network layers. \n\nWeaknesses. \n- This analysis makes a strong assumption that the network has Gaussian weights. This is quite far from real settings where the network is learned from some data. Authors acknowledge that as a limitation. It will be good for the community to start analyzing this real problem. \n\n\n I did not check the derivations for accuracy, so I do not have any specific question at this point. n/a", " This paper considers the problem of inverse problems with generative priors. Prior work has shown that gradient descent recovers the ground truth in poly-time under the assumption of generative models with random weights and sufficient expansion. As real world networks do not satisfy the expansivity assumption, recent work has tried to relax the assumptions. The most relevant prior work [22] allows for contractive layers (i.e., the size of layer $i$ can be smaller than layer $i-1$), but requires the size of layer $i$ to grow exponentially with $i$. This further implies that the number of measurements required for compressed sensing / phase recovery grows exponentially with the depth of the generative model, while traditional results only required linear / quadratic dependence on depth.\n\nThis work shows that gradient descent converges in poly time, and only requires the size of each layer and number of measurements to grow polynomially in $d$. The results are novel and significant, and is in my opinion a valuable contribution to the field. Strengths:\n+ This paper considers an open problem in the field of solving inverse problems with generative priors. In my opinion it is sufficiently important and significant.\n\n\n+ The analysis proposes a new condition, called the Range Restricted Weight Distribution Condition (R2WDC). This condition is seemingly the key to prove that contractive generative models can also be used. To the best of my understanding, the traditional WDC condition required an isometry between each successive layer _for all_ vectors in $\\mathbb{R}^{n_i}$, which led to the requirement that $ n_{i+1} \\geq c_i n_i \\log n_i$. Under the new R2WDC condition, this isometry needs to only hold for vectors in $\\mathbb{R}^{n_i}$ that can be generated by the neural network, which allows for the possible contraction between the layers. 
The idea is intuitive, and I think the paper provides a good explanation for its success.\n\n+ The results are clearly stated and compared to existing results.\n\nWeaknesses:\n- In theorem 4.4, the number of steps $T$ is bounded by $2^d$, which is not truly polynomial in the generator parameters. However, this can be forgiven as $d$ is typically on the order of $\\log n$. \n\n- This is similar to the above complaint -- theorem 4.4 bounds the norm of the noise as $ || \\eta || \\leq \\frac{||x||}{poly (d) 2^{d/2}}$. This is not a very big problem, but perhaps the claims for the denoising effect of gradient descent (for e.g., in lines 87 - 90) can be toned down considering how the noise norm must be much smaller than the norm of $x$.\n\n- I'm not sure I agree with Remark 1 . How would the R2WDC condition be satisfied if you scaled the weight matrices? As a simpler counter example, if $W \\in R^{m \\times n}$ is such that $W_{ij} \\sim N(0,1/m)$, then $||Wx|| \\approx ||x||$. Now, if $W_{ij} \\sim N(0,2/m)$, then you get $||Wx|| \\approx 2||x||$, and it seems like the requirements in R2WDC would not be satisfied.\n\n I have no major questions or concerns-- I am listing my minor concerns as clarifications.\n\n- Line 87 - 90: Can the authors comment on whether the $O(|| \\eta ||_2)$ error in [20] appears due to the assumption that $m \\approx k \\log n$? \nIs there something special about the analysis in this paper that allows for the $\\sqrt{k/m}$ term which does not appear in [20]? \n\n- It is generally assumed that phase retrieval is a more difficult problem than compressed sensing, for e.g., random generative priors require $kd \\log n$ measurements for compressed sensing, but the best known results for phase retrieval require $k d^2 \\log n$. I would appreciate a short paragraph on whether a similar phenomenon appears in this paper, as I think it would be an interesting open problem to find the optimal poly dependence on $d$. \n\n- Perhaps for ease of understanding, it is worth clarifying that the benefit of R2WDC is that it takes into account the whole generative model from layer 1 to $i$, as opposed to WDC, which only considers the input / output pair of layer $i$ without considering the previous layers.\n\n The limitations are sufficiently stated." ]
[ -1, -1, -1, -1, -1, 5, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, 4, 2, 2, 5 ]
[ "8FQWEkpx6k7", "Toh427gvA9s", "3lE50hOp7FU", "vrkbdJl9W1M", "IDS8RczrKjO", "nips_2022_319xcX5qIcO", "nips_2022_319xcX5qIcO", "nips_2022_319xcX5qIcO", "nips_2022_319xcX5qIcO" ]
nips_2022_1ItkxrZP0rg
A Spectral Approach to Item Response Theory
The Rasch model is one of the most fundamental models in item response theory and has wide-ranging applications from education testing to recommendation systems. In a universe with $n$ users and $m$ items, the Rasch model assumes that the binary response $X_{li} \in \{0,1\}$ of a user $l$ with parameter $\theta^*_l$ to an item $i$ with parameter $\beta^*_i$ (e.g., a user likes a movie, a student correctly solves a problem) is distributed as $\mathbb{P}(X_{li}=1) = 1/(1 + \exp(-(\theta^*_l - \beta^*_i)))$. In this paper, we propose a new item estimation algorithm for this celebrated model (i.e., to estimate $\beta^*$). The core of our algorithm is the computation of the stationary distribution of a Markov chain defined on an item-item graph. We complement our algorithmic contributions with finite-sample error guarantees, the first of their kind in the literature, showing that our algorithm is consistent and enjoys favorable optimality properties. We discuss practical modifications to accelerate and robustify the algorithm that practitioners can adopt. Experiments on synthetic and real-life datasets, ranging from small education testing datasets to large recommendation systems datasets show that our algorithm is scalable, accurate, and competitive with the most commonly used methods in the literature.
Accept
This is a strong paper with interesting theoretical results and important practical contributions (much faster parameter learning than prior methods with little performance drop-off). All reviewers agreed it was above the bar for acceptance to NeurIPS.
train
[ "Z1PfbV0gnc", "uCMtcM9E-Ek", "-ihXeFBct6", "_y8E0HDOuey", "LfT-WNCQuN", "dnmnq8Tyykz", "3qTQmnxzSWh", "kQYktGNVG-r", "cvCM9-W1LV3", "QyiB-XjL4pi", "czKBB9agFID", "iCZm8ZowkZR" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. My concerns are addressed.\n\nI encourage the authors to describe the similarities and differences in the proof technique as compare to the Rank Centrality paper. This will make the paper stronger, not weaker.", " Thank you very much for the targeted and professional response. I think the discussion of modeling with MIRT is a great addition and will be welcomed by practitioners and the new Bayesian results are also a good addition since that is a highly used baseline for practitioners. Thank yo ualso for the clarification to prior work above - I thin that paragraph would definitely help situate the paper.", " I'm happy with the authors' response - especially with the new results on the full Bayesian method. Although the AUC dropoff on a large dataset, ML-100K, is not minimal, the overall efficiency improvement is significant. If other reviewers are happy with the response I can up my score. ", " We hope that the additional experiments in our main reply above address your question on how our method would compare to a Bayesian method. The Bayesian algorithm that we compare the spectral method to is based on a recent and well-cited paper [2] so we believe that it is a strong baseline. The two methods are comparable in terms of accuracy. However, the Spectral method is orders of magnitude faster, and the difference is especially pronounced on large datasets with sparse responses like ML-100K.\n\n> at least discuss what Bayesian estimation methods would do ... where you know the real item parameter prior \n\nMMLE is a semi-Bayesian method where one first assumes a prior distribution over the user parameters. The algorithm then maximizes the marginal likelihood in which the user parameters have been integrated out, to obtain the item parameters. Figure 1 in our main paper shows how such an algorithm might do when the prior is correctly specified vs incorrectly specified. There we see that the accuracy of the prior significantly impacts the overall accuracy of the algorithm. \n\nYour comments on the discussions of our experiment results are well appreciated. If accepted, we will certainly discuss the experiment results in greater depth as the final version allows more space. \n\nAn important question that you posed and can be a really good discussion point is\n> Why AUC, and log-likelihood are similar while top-K accuracy differs significantly between the methods\n\nFor smaller-scale education datasets (UCI, 3 Grades, LSAT) the number of items (tests) is small and there are no missing responses. There is much data relative to the number of parameters. As the algorithms, after all, operate on the same statistical model, they output very similar estimates and the performance is very close. The difference is more apparent in large-scale datasets and in top-K accuracy (which is not defined for the education datasets) where the responses are sparse and there is a large number of items.\n\nTo transform rating data to binary response data, for each user, we convert all ratings higher than the average to 0 and 1 otherwise (an item with a higher parameter value is better) like in [3]. The reference top-K set is the set of items with the highest average ratings. However, we have also removed items with very high but few ratings (<= 10) from this reference set as we treat these ratings as noisy. \n\nNote that the $K \\in [10, 25, 50]$ are relatively small compared to the number of items in the large-scale datasets that we use. 
This is applicable in settings such as RecSys where we care about identifying the top few items. We observe that the other methods tend to \"overvalue\" items with noisy responses whereas our method, which uses pairwise differentials, is less susceptible to noisy ratings. This could explain why in some datasets, some of the baseline algorithms fail to identify any of the top items. Perhaps, this highlights the limitations of these classical IRT methods in applications like RecSys where there is much noisy data. \n\nOn the other hand, AUC and log-likelihood are evaluated on a held-out dataset that contains a large number of items and users. The difference among the methods, when averaged over the users and items, is quite small. Therefore, the performance difference becomes less significant.\n\nAs for the results on the BX dataset, we would like to point out that the difference between the spectral method and the best method (MMLE) is relatively small. So the spectral method is not significantly worse than the competitors. \n\n> Some minor points about the presentation\n\nThese are all good points and we'll certainly address these in a final version.\n\n[2] Natesan et al., Bayesian Prior Choice in IRT Estimation Using MCMC and Variational Bayes\n\n[3] Lan et al., An Estimation and Analysis Framework for the Rasch Model (ICML 18')", " We hope that our main reply above addresses both of your comments about potential extensions of the spectral method as well as joint parameter estimation. \n\nFor joint parameter estimation, another approach is to run the spectral algorithm twice, one for the item parameter and the other one for the student parameter. Algorithm 1 estimates the item parameter $\\hat\\beta$ using the data matrix $X$. Using $X^T$ instead, Algorithm 1 gives an estimation of the student parameter $\\hat \\theta$. However, since the spectral algorithm essentially learns the 'difference' in the parameters and outputs a normalized version of the user and item parameters, one needs to \"align\" the two sets of parameters. Fortunately, this alignment only comes in as a single scalar $a$. We estimate $\\hat a$ by solving the following problem (keeping $\\hat \\beta$ and $\\hat \\theta$ fixed).\n\n$\\hat a = \\arg\\max_a \\sum_{l\\in [n], i\\in[m]} X_{li} \\log\\frac{1}{1+\\exp(-(\\hat\\theta_l + a - \\hat\\beta_i))} + (1-X_{li}) \\log\\frac{1}{1+\\exp(-(\\hat\\beta_i - \\hat\\theta_l - a))}$\n\nThis is a concave maximization problem in a single scalar variable and can be solved efficiently.", " We hope that our main reply above shows how the spectral algorithm can be extended to more expressive IRT models such as the mixed Rasch model.\n\nAs for your specific comments about the 2PL and 3PL model. The spectral algorithm is derived based on certain properties that are characteristic of the 1PL model. Therefore, we are unsure what an extension to the 2PL and 3PL model would look like. The idea of a factored Markov chain sounds interesting but we're not familiar with the topic so we can't say much here.\n\n> Potential negative social impacts ...\n\nThis is a very good point. We will certainly include a more thoughtful discussion of this in the final version of the paper.\n\n> Minor points on presentation ...\n\nThanks for pointing these out, especially our claims on item parameter estimation. 
Perhaps a better way of phrasing this would be \"one-sided parameter estimation\", whether it is estimating the students' abilities or the items' parameters in the context of recommendation systems.\n\n> The authors cite several other spectral methods but do not make clear if any of the theoretical results come from those papers. Is Proposition 2.1 specific to the current paper or was it proved in one of the earlier works?\n\nThis proposition and its proof are new. This is because the pairwise transition probabilities of the Markov chain in our algorithm are different from those in Rank Centrality.\n\n> For Theorem 3.3 (where m grows) the authors need to describe to this audience why JMLE fails in this case and what is different about their algorithm that makes it succeed.\n\nWe're actually not claiming that JMLE fails in this regime and this theorem only refers to the spectral method (the punchline is that when m grows, the spectral algorithm enjoys better, in fact, optimal error rate). However, it has been shown that JMLE fails when the number of items m is a constant whereas our algorithm is still consistent under that regime (Theorem 3.1).\n\n> related works and how our algorithm differs\n\nYou raised very good points about the related work section and we will incorporate the suggested changes in a final version. We also hope to clarify the connection between our algorithm and Rank Centrality, the closest spectral algorithm in the literature. Superficially, they seem similar as both construct a Markov chain and estimate the parameters from the stationary distribution. However, the construction of the Markov chains, the pairwise transition probabilities are different. The resulting analysis is also different for our algorithm as we have a different sampling model from the Erdos-Reyni graph sampling model in the Rank Centrality paper. Furthermore, the BTL model where Rank Centrality is applied to has 1 set of parameters whereas the Rasch model has two. This difference plays out in Theorem 3.1 and Theorem 3.3 where we have two different guarantees for two regimes of m. Here the message is that the relative size of m and n affects the estimation error. Similarly, the theorems and propositions in our paper are new and not simple extensions of the results for Rank Centrality.", " In addition to our main reply above, we hope to address some of your specific comments here.\n\n> \"Theorems 3.5 and 3.6 appear to be missing something ... \"\n\nThis is a Cramer-Rao lower bound so the estimator $T$ is assumed to be an unbiased estimator. This estimator has to output the true parameter given infinite data for any $\\beta^*$. If it always outputs $T(X) = 0$ then it's no longer an unbiased estimator. The expectation is over $X$.\n\n> \"existence and uniqueness of the estimator, i.e., of the stationary distribution of (3)\"\n\nThis is a good point. The existence and uniqueness of the stationary distribution can be guaranteed by making sure that the Markov chain is ergodic. Roughly speaking, this means that from every state, one can land at another state (after some time) with a positive probability. This can be guaranteed using our regularization scheme which ensures that the pairwise transition probabilities are always positive and that the graph underlying the Markov chain is connected. If accepted, we will emphasize this point in the final version of the paper.\n\n> \"The similarities and differences to existing pairwise & spectral approaches\"\n\nThis is a good suggestion as well. 
The main reason we don't discuss this in greater depth is because of the lack of space in the paper. If accepted, we should be able to discuss the connection to other pairwise methods in the final version which allows for more space.\n\n> Typos and references\n \nThank you for pointing out these details. We'll surely address them in the final version.\n\n> Why is the MMLE so bad in the Top-K case, despite achieving good AUC and log-likelihood? (I don't understand the caption of Table 1).\n\nThis seems to be a point of confusion for reviewer 4 (id: L9zk) as well. Please take a look at our reply to reviewer 4 where we clarify our setup and provide more explanations as to why the spectral method tends to outperform other methods in terms of top-K accuracy while AUC and log-likelihood tend to be similar among the methods.\n\n> Can you comment to what extent the proofs of Thms 3.1 and 3.2. mirror the proofs in Rank Centrality paper for the BTL model?\n\nIt is reasonable to see a lot of similarities between our algorithm and Rank Centrality as both construct a Markov chain and computes its stationary distribution to recover parameter estimate. As a starting point, the analyses of the two algorithms use similar tools (such as Lemma A.3 in our paper) to bound the error of the stationary distribution of a perturbed Markov chain. However, the pairwise transition probabilities in the two algorithms are defined differently. Our sampling model is also different from the Erdos-Reyni sampling model for the comparison graph in the analysis of Rank Centrality. Furthermore, the Rasch model has two sets of parameters while the BTL model has one. Here, we see this difference plays out as we have two separate results for two regimes of m -- when m is a constant or small relatively to n vs when m also grows with n. ", " We thank the reviewers for all of their comments. We’re glad to see that the reviewers appreciate the novel application of the spectral algorithm to the Rasch model and the theoretical guarantees obtained in our work. We will address here the main comments from the reviewers, mostly pertaining to our experiments and extensions of the spectral method. We will also address more specific comments from each reviewer individually.\n\n## Extensions of the spectral algorithm\n\nAs our work proposes a new approach to IRT based on spectral methods, extensions of the algorithm to more expressive IRT models are certainly promising research directions.\n\nReviewer 3 asked how the spectral algorithm can be extended to perform joint estimation of both user and item parameters. We can apply the spectral algorithm to obtain the item parameter $\\hat\\beta$. Each individual user parameter can be obtained by maximizing the log-likelihood function\n$\\hat \\theta_l = \\arg\\max_{\\theta} \\sum_{i} X_{li}\\log\\frac{1}{1+\\exp(-(\\theta_l - \\hat\\beta_i))} + (1-X_{li})\\log\\frac{1}{1+\\exp(-(\\hat\\beta_i-\\theta_l))}.$\nThis is a concave optimization problem in $\\theta_l$ and can be solved efficiently.\n\nReviewers 2 and 3 asked how the spectral algorithm can be extended to model expressive IRT models. As an example, we can extend the spectral algorithm to account for population heterogeneity under the mixed Rasch model [1]. The mixed Rasch model is similar to the Multivariate IRT model in that it aims to capture the idea pointed out by reviewer 2: a student might be good at one subject (arithmetic) while being bad at another (literature). 
In terms of modeling, each of these subjects corresponds to a mixture component. Both the student abilities and the test difficulties differ from one mixture component to another.\n\nA mixture learning algorithm can proceed as follows:\n* Perform spectral clustering on the rows of the response matrix $X$ (such as via sparse PCA) to produce $K$ clusters of rows.\n* Run the spectral algorithm on these K clusters to obtain the K set of parameters.\n* The value of $K$ can be chosen using cross-validation on a held-out validation set.\n\nGiven that the spectral algorithm runs significantly faster than other estimation algorithms, we can see its utility in learning mixtures of Rasch models where $K$ is large. The learned parameter estimates can be used to study the different subpopulations of students and how the test difficulties vary among these subpopulations.\n\n[1] J Rost, C Carstensen, M Von Davier. Applying the Mixed Rasch Model to Personality Questionnaires (1997).\n\n## Comparisons to a Bayesian method\n\nWe have performed some additional experiments with a 1PL Bayesian estimation method. Its implementation can be found at https://github.com/nd-ball/py-irt, based on a recent paper [2]. The Bayesian algorithm uses a hierarchical prior and variational inference for parameter estimation. Given that the paper [2] is well cited, we believe that this is a strong baseline. \n\nThe message is similar to the experiment results in our main paper. The spectral algorithm is competitive in terms of accuracy. However, it is significantly more efficient than the Bayesian method. Furthermore, like other methods shown in our experiments, the Bayesian method doesn’t perform as well as the spectral method in terms of top-K ranking. The results imply that the spectral method will enjoy more practical advantages in applications where accurate top-K ranking is desired such as recommendation systems, approval voting systems, crowdsourcing, etc.\n\n```\n| Dataset | AUC (Bayesian) | AUC (Spectral) |\n|:--------:|:-----------------:|:-----------------:|\n| LSAT | 0.706 | 0.707 |\n| 3 Grades | 0.532 | 0.532 |\n| UCI | 0.565 | 0.565 |\n| ML-100K | 0.681 | 0.662 |\n|:--------:|:-----------------:|:-----------------:|\n| | LogLik (Bayesian) | Loglik (Spectral) |\n| LSAT | -0.487 | -0.487 |\n| 3 Grades | -0.681 | -0.687 |\n| UCI | -0.693 | -0.706 |\n| ML-100K | -0.635 | -0.646 |\n|:--------:|:-----------------:|:-----------------:|\n| | Top-K (Bayesian) | Top-K (Spectral) |\n| ML-100K | 0; 0; 0.04; | 0.4; 0.6; 0.54 |\n|:--------:|:-----------------:|:-----------------:|\n| | Time (Bayesian) | Time (Spectral) |\n| LSAT | 63 | 0.028 |\n| 3 Grades | 27 | 0.015 |\n| UCI | 26 | 0.021 |\n| ML-100K | 2.8k | 2 |\n```\n\n[2] Natesan et al., Bayesian Prior Choice in IRT Estimation Using MCMC and Variational Bayes", " This paper develops a spectral algorithm for estimating the item parameters in the Rasch item response model.\n\nThe authors show that the proposed estimator is consistent, and they provide finite-sample error bounds that are (near-)optimal.\n\nEmpirically, the authors show that their approach performs favorably in comparison to existing estimators: Predictive performance is on par or better, and the algorithms typically runs faster than competing approaches. 
Strengths:\n\n- The Rasch model is widely used, and the problem addressed by this paper is very relevant to the NeurIPS community.\n- The theoretical contributions are comprehensive and address a number of important questions that remain mostly open for other estimators for the Rasch model: consistency, finite-sample error, optimality.\n- In practice, the algorithm is simple to understand and to implement, and performs favorably. As such, the paper's contributions also relevant to practitioners.\n- Beyond the specific results presented by the author, this paper also provides a bridge between the literature on spectral algorithms for the Bradley-Terry model and the Rasch model. I expect this paper to unlock further research on the topic.\n- The experimental evaluation is excellent. The illustration of failure cases (Fig. 1) for various estimators is helpful in understanding the trade-offs that different methods have. The real-data experiments are thorough, and also include datasets and metrics where there is not necessarily a clear advantage for their proposal, giving a balanced picture of the empirical benefits.\n- The paper is well-written and easy to follow. Throughout the paper, the authors strike a good balance between building intuition and formalizing their claims.\n\nAltogether, the combination of these strengths make for a significant contribution that will have impact on ML researchers and practioners.\n\nWeaknesses\n\n- Theorems 3.5 and 3.6 appear to be missing something. Clearly setting $\\beta^* = 0$ and $T(X) = 0$ achieves zero error (and satisfies the unbiasedness assumption). Perhaps $\\beta^*$ needs to be bound, e.g., by taking the supremum over a certain range. Similarly, it is not clear what the expectation is over (I read it to be over $X$ but this could be made explicit).\n- There is no discussion about the existence and uniqueness of the estimator, i.e., of the stationary distribution of (3). This needs to be formalized more rigorously. Under what conditions on `X` does the spectral estimate exist? How do Algorithms 1 & 2 handle cases where this condition is not satisfied?\n- The similarities and differences to existing pairwise & spectral approaches for the Rasch model should be emphasized in the main text. Reading through Appendix D, it is apparent that the differences are major, but this was not clear in the main text (beyond the brief comment in the introduction about dense vs. sparse matrices). I strongly suggest making use of any additional space to move parts of Appendix D to the main text. Finally, statements like that of line 135, \"our algorithm instantiates the general spectral approach [...]\" are slightly misleading, given prior work.\n\nThese weaknesses are relatively minor and I believe they can be addressed in a camera-ready version.\n\nSmall comments:\n\n- I am surprised that Agarwal et al. [2] is not cited in Section 4, \"accelerating the spectral algorithm\".\n- Paragraph starting on line 50: an explicit expression for the CMLE would be helpful; it is hard to understand what the \"likelihood conditioned on $\\{ s_l \\} $\" looks like.\n- l. 60: later -> latter\n- l.116 and 204: practicioners -> practitioners - Why is the MMLE so bad in the Top-K case, depsite achieving good AUC and log-likelihood? (I don't understand the caption of Table 1).\n- Can you comment to what extent the proofs of Thms 3.1 and 3.2. mirror the proofs in Rank Centrality paper for the BTL model? 
Limitations are adequately addressed.", " UPDATE: I increased my score based on the authors' response for noted in my counter-response.\n\nThe paper presents a novel representation of the statistical correlation between items in a 1PL (Rasch) IRT model that allows faster and (empirically) more accurate estimation of the underlying item difficulties (beta). By embedding the items in a Markov Chain and calculating its stationary distribution, the authors are able to prove finite sample complexity bounds, which I believe is a first for such a method. They provide extensions that speed up learning by using different normalization constants for each item and also add a regularization term to handle sparsity in real datasets. Their empirical results show either parity or improvement against existing methods with an order-of-magnitude speedup against the most-accurate competitor. The paper provides a very thorough analysis of the spectral approach in the Rasch model case. I have some questions below about the differentiation to other spectral methods and some terms and definitions but overall those are minor and I think the analysis in the Rasch case is first rate. The theorems look correct to my reading and exploring the connection between a MC's stationary distribution and the overall item difficulty parameters is a clever choice. I also liked the real-world practitioner extensions with variable normalization and regularization which make the algorithm truly applicable to the always-way-to-sparse item response data.\n\n However, I was disappointed that the authors did not mention or attempt to extend their approach to more modern IRT models including 3PL (where items have 3 axes of rating, not just difficulty) or multivariate IRT (where there are multiple thetas, for instance a student’s ability adding fractions and their ability to calculate percentages). These more modern models are often calibrated using machine learning techniques (see below) and have far more intense computational burdens than Rasch so it is disappointing to not see at least a proposed extension to those cases.\n\nMore specifically, if we move to just 2-PL, how would the edges in the spectral embedding (the probability of transitions) represent both item parameters? Could this be done using a factored Markov chain or some other more complicated graphical model? Or is the spectral approach limited to only a single item-difficulty parameter? \n\nIn the case of MIRT, we have seen recent successes of machine learning methods that use pairwise item comparisons to advance the very difficult problem of calibrating IRT models, see:\n“Multidimensional Item Response Theory in the Style of Collaborative Filtering” (https://link.springer.com/article/10.1007/s11336-021-09788-9) as an example. Since computational burdens are much larger with MIRT I would like to know if the computational advances in the Rasch have repercussions in that space?\n\nReturning to the Rasch case and the results in the paper, I found the spectral embedding clever and a unique statistical approach for solving this problem. I very much appreciate the finite-sample complexity bounds in the paper. There are, however, are several places where I would like to see clarifications:\n\nThe authors cite several other spectral methods but do not make clear if any of the theoretical results come from those papers. 
Is Proposition 2.1 specific to the current paper or was it proved in one of the earlier works?\n\nFor Theorem 3.3 (where m grows) the authors need to describe to this audience why JMLE fails in this case and what is different about their algorithm that makes it succeed.\n\nTable 1 needs to be reformatted to fit the margins. Also, the authors should make a bigger deal about the speedup compared to MMLE. It’s not just faster, the new method is an order of magnitude quicker.\n\nI found Section 6 unhelpful and it seemed to be trying to justify why this paper fits at NeurIps by describing tangentially related problems. I know there are connections to general logistic regression problems here but this section would be better used describing more complex IRT models like MIRT and how the approach could be used there. In terms of fit at Neurips, I think the paper is ok as a statistical learning theory approach that models a psychological phenomenon although ultimately this result belongs in a Psychometrics journal to be really picked up by practitioners.\n \nMinor points:\n\nLines 29 and 30: citations needed for these claims\n\nLine 31: “Traditionally, the goal of estimation under the Rasch model is to recover the item parameters” … No, the goal is to assess the mastery parameters theta – at the end of the day we want to know whether students have certain skills or not. Calculating beta values are a means to that end but they are not the goal.\n From the review, how does the new approach expand our capabilities in the 2PL/3PL or MIRT case? \n\nThe authors cite several other spectral methods but do not make clear if any of the theoretical results come from those papers. Is Proposition 2.1 specific to the current paper or was it proved in one of the earlier works?\n\nFor Theorem 3.3 (where m grows) the authors need to describe to this audience why JMLE fails in this case and what is different about their algorithm that makes it succeed.\n\nIn accordance with Neurips guidelines on AI ethics, the authors should provide in their response a more detailed description of potential biases in this method compared to existing IRT approaches and clearly spell out possible negative societal impacts. Beyond the missing extensions above I do think a discussion of ethics is necessary for this paper to be published, given Neurips's dedication to the topic. I understand the work here is purely theoretical so I didn't flag it for ethics review below, but the application domain for this IRT method is evaluating children. So any systematic errors or biases in the new approach need to be spelled out much more clearly. The authors should provide in their response a more detailed description of potential biases in this method compared to existing IRT approaches and clearly spell out possible negative societal impacts.", " The paper proposes an inference algorithm based on a spectral estimation approach for estimation of the item parameter in the IRT model with finite-sample guarantees. The authors provide detailed convergence analyses and demonstrate the utility of the proposed estimator via both simulated and real-world datasets, on which the proposed approach achieves performance comparable to other estimators (e.g., MLE) at a fraction of the computational cost. The paper makes interesting theoretical contributions to the inference algorithm for IRT model. The algorithm is simple to implement and has finite-sample guarantees. The analyses are sound and the experiments corroborate with the analytical results. 
The proposed method is also competitive with other inference algorithms. I do not find major weaknesses; some suggestions are outlined in the \"limitations\" part.\n\n Will and how does the algorithm generalize to settings in which student/user parameters need be estimated instead of sampled from a prior? It seems that the proposed approach is applicable to only 1PL IRT model (Rasch model). Although still widely used, there are other, potentially more power, variants of the IRT models (e.g., 2 or 3 PL IRT models) and it is not quite clear whether or how the proposed algorithm generalizes to other IRT variants. This is a potential limitation, although not a main concern.\n\nThe proposed approach relies on prior distribution on the student parameter, i.e., the student parameters are not estimated. For applications where only item parameter estimations are needed, the proposed approach might be sufficient. In other, also common, settings where the student parameters also need be estimated, it is unclear how the proposed approach would be applicable.", " This paper proposes a spectral estimation algorithm for item parameters in the 1PL item response (aka Rasch) model. The authors conducted theoretical analyses of the proposed algorithm in several scaling regimes, proposes a variant of their algorithm under practical constraints, and conducted experiments on real data to show that their method is on-par with existing algorithms in terms of estimation accuracy while being more computationally efficient. Strengths: \n- Although spectral estimation has been thoroughly studied, its application to the Rasch model is novel\n- I liked the fact that the authors provided a version of their algorithm in practice where one can use a different normalization factor d for every item. This is important and likely helps a lot in terms of Markov chain convergence - a uniform d can be really bad for some items on real world datasets\n- I have not checked the proofs but the theoretical results and discussions seem sound\n\nWeaknesses:\n- ML is only one family of estimation methods but there are others, say posterior mean etc. I get that the proposed methods may be better than M/J/CMLE, which have their own problems for sure, but how would its accuracy/runtime compares to Bayesian methods is not clear. See below. \n- The experimental settings and results are not clear. See below. \n- The presentation could be improved in several places. See below. \n 1. Main concern: experiments. The experimental settings and results (on real data) are really underdiscussed, even in the supplementary material. The overall takeaway is a solid one - the proposed spectral algorithm is on-par with existing algorithms in terms of accuracy and is much faster, which is perfectly fine. However, some details give me a sense that some baselines are not properly treated. Some things that the authors can do to help are:\n- at least discuss what Bayesian estimation methods would do; more accurate when the prior is right? Too inefficient on large real world datasets? A simulation study where you know the real item parameter prior or experiments on one real world dataset would certainly help. \n- many datasets are not discussed (LSAT, UCI, 3GRADES) where the number of items is very very small. Here, AUC is the same for all methods. 
The experimental results are not thoroughly discussed; I'm a little unsure about the huge variation in terms of ranking performance since after all, these are different algorithms for the same underlying statistical model. The 0.0 results for top-K accuracy for some methods are a bit suspicious; it's not totally surprising, but more discussion is definitely needed. Otherwise, it's just really hard to properly evaluate how beneficial the proposed algorithm is in practice. \n- AUC scores for most methods are pretty close on most datasets, which makes perfect sense. However, the proposed method is obviously worse on BX; why?\n\n2. Some minor aspects of the presentation could be improved:\n- in the abstract, it's probably more accurate to call the proposed algorithm \"item parameter estimation\" instead of \"item estimation\"\n- line 32 \"educational testing\"\n- line 37: JMLE does not really have to use an alternating maximization algorithm, right? The problem is jointly convex in the item and ability parameters\n- I would like to see Theorem 3.1 and 3.3 written with ||beta - beta*||_2/sqrt(m) on the left hand side - this quantity makes more sense in practice\n- Need to zoom into the small n regions in Figure 1. Lines are overlapping at times and it's hard to tell which is which. This field is not really applicable to this paper" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "3qTQmnxzSWh", "dnmnq8Tyykz", "kQYktGNVG-r", "iCZm8ZowkZR", "czKBB9agFID", "QyiB-XjL4pi", "cvCM9-W1LV3", "nips_2022_1ItkxrZP0rg", "nips_2022_1ItkxrZP0rg", "nips_2022_1ItkxrZP0rg", "nips_2022_1ItkxrZP0rg", "nips_2022_1ItkxrZP0rg" ]
nips_2022_1pHC-yZfaTK
Regret Bounds for Information-Directed Reinforcement Learning
Information-directed sampling (IDS) has revealed its potential as a data-efficient algorithm for reinforcement learning (RL). However, theoretical understanding of IDS for Markov Decision Processes (MDPs) is still limited. We develop novel information-theoretic tools to bound the information ratio and cumulative information gain about the learning target. Our theoretical results shed light on the importance of choosing the learning target such that the practitioners can balance the computation and regret bounds. As a consequence, we derive prior-free Bayesian regret bounds for vanilla-IDS which learns the whole environment under tabular finite-horizon MDPs. In addition, we propose a computationally-efficient regularized-IDS that maximizes an additive form rather than the ratio form and show that it enjoys the same regret bound as vanilla-IDS. With the aid of rate-distortion theory, we improve the regret bound by learning a surrogate, less informative environment. Furthermore, we extend our analysis to linear MDPs and prove similar regret bounds for Thompson sampling as a by-product.
Accept
This paper has been well-received by the reviewers already in the initial round, and the reviewers were all happy with the authors' responses. The updates already made to the manuscript clearly showed the commitment of the authors to take all the reviewers' comments into account for the final version. After some discussion, all reviewers agreed that the paper should be accepted for publication at NeurIPS 2022. I encourage the authors to finalize the promised changes for the camera-ready version, and in particular complete the preliminary experimental section provided in the revision.
train
[ "n62OaB0bG3q", "mSrDnc2lJxv", "aUPGN2EP48g", "0HY4amrm3sx", "hd0e1rWmYWA", "C71a-jPvH6I1", "P7t852WOBEp", "1UYlzOpEEIm", "qk3eLccl7wV", "CQo_Bb8d4rH", "XRsSRMb0mmo", "XSxjFseQzna", "vZf8g-tCFoW", "jg0D8CTuM6F", "pOBAWa5g96L", "Tpn0-rrxq6w" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for addressing my comments. ", " I appreciate the authors's response, which address all of my questions. Sorry for the late reply. And I am actually satisfied with the rebuttal and especially the newly added algorithm and its implementaton. I have raised my score for this work. ", " Thanks a lot for raising the score! ", " Thanks a lot for appreciating our contribution! ", " Dear reviewer,\n\nPlease let us know if you have further questions about our response. Thanks a lot.", " Thanks for the response. I raise my score correspondingly.", " Thank you for the detailed rebuttal. I do agree that your work is the first to extend IDS to MDPs. My concern is, \"are any of the conceptual steps required to do so that challenging?\" For example, the tensorization of the KL divergence across time steps, which seems to be at the heart of Lemma A.1, seems to have been leveraged in frequentist settings, see e.g. : https://arxiv.org/abs/1806.00775\n\nI am not saying no novelty is present. I am just unsure about the extent to which the novelty and challenge makes me overly enthusiastic about acceptance. However, I do think that the simplicity of the approach is a merit, and the writing makes me view the work favorably. I'll raise my score to a 6.", " Dear reviewers,\n\nThanks a lot for your valuable reviews and suggestion! We have revised our draft according to your comments and added a detailed implementation and some preliminary experiments for regularized-IDS (Appendix A). \n\nBest,\n\nAuthors", " Thanks a lot for acknowledging our contribution! We would like to respond to comments point by point.\n\n1. **“rigorousness in the discussion related to Γ∗in Sec. 3.1 and related proof.”**\n\nThanks a lot for the careful reading! We completely agree with your argument. We feel there is a typo here. In Line 482 of the proof of Lemma 3.2, the argument should be \n$$\n\\Gamma_*\\leq \\max_{\\ell\\in[L]} \\Gamma_\\ell(\\pi\\_{TS}^\\ell)\n$$\nsince the bound of $\\Gamma_\\ell$ is independent of $\\ell$. We hope this could clarify your question.\n", " Thanks for your thoughtful review. We would like to respond to comments point by point.\n\n1. **“However, it does seem that the techniques and arguments are rather standard, and I think it would be useful for the authors do explain not just the results derived in prior works, but to give a sense of how common (or unique) their techniques are in comparison to the rest of the Bayesian regret community.”**\n\nThanks for your suggestion! In literature, there are two ways to prove Bayesian regret bounds:\n- The first one is to introduce confidence sets. But this **cannot** be used to analyze IDS to the best of our knowledge. \n- The second one is to use information-theoretical analysis but almost all the analysis is limited to bandits setting (Russo and Ben, 2014). Extending such techniques to the MDP case is highly non-trivial since we need to model the randomness from the transition dynamic. As commented by Reviewer uxAT, although similar techniques appear in different literature, our work is **the first one** to use information-theoretic analysis to analyze Bayesian regret in MDPs. \n\nWe also would like to highlight one of key our technical contributions is Lemma A.1. 
\n\n**Lemma A.1.** For environment $\\mathcal E$ and its corresponding mean of posterior measure $\\bar {\\mathcal E}\\_\\ell$, the following holds for any policy $\\pi$,\n\\begin{equation*}\n \\sum_{h=1}^H \\mathbb E_{\\ell}\\mathbb E_{\\pi}^{\\bar{\\mathcal E}\\_\\ell}\\left[D_{KL}\\left(P_h^{\\mathcal E}(\\cdot|s_h^\\ell,a_h^\\ell)||P_h^{\\bar{\\mathcal E}\\_\\ell}(\\cdot|s_h^\\ell,a_h^\\ell)\\right)\\right]= \\mathbb I_\\ell^{\\pi}\\left(\\mathcal E; \\mathcal \n H_{\\ell, H}\\right).\n\\end{equation*}\n\nBy exploiting the property of independent priors, we can relate the mutual information with the KL w.r.t the mean of posterior measure. We believe this is **the first of this kind** of results in literature and this lemma is critical to bound the Bayesian regret of IDS and to derive the computational-efficient version (regularized-IDS).\n\n2. **This makes me wonder - either is (a) the analysis loose, or (b) can one derive lower bounds to show that IDS (without modification) necessarily suffers this worse sampling complexity?”**\n\nThanks for your question. For vanilla IDS (Section 3.1) where the agent needs to learn exactly the whole transition dynamic, we conjecture that this algorithm cannot achieve optimal regret bound in the tabular case since learning every part of the whole dynamic is redundant to learn the optimal policy. \n\nFor surrogate-IDS, we believe its regret bound could be tightened and should be optimal. At this moment, the loose part comes from the bound of information-ratio. When the prior is specified to Dirichlet prior, Lu and Van Roy (2019) has shown that the upper bound of information-ratio can be independent of S (in contrast, our current upper bound is SA but holds for any prior) which leads to an optimal regret bound in our case. It will be an important future work to tighten the upper bound of the information-ratio for any prior.\n\n3. **“Another weakness is that the sharper guarantees required computing an explicit cover, which is computationally prohibitive. I would have been more excited if the refined regret were attainable with computationally efficient algorithms.”**\n\nThis is a very good question and we do not have an immediate answer here. Another possible way is to directly learn the optimal value function or optimal policy rather than computing an explicit cover for the environment. We believe this is an interesting future work.\n\n", " Thanks for your thoughtful review! We would like to respond to comments point by point.\n\n1. **“While I appreciate the comparison of the regret bound with other methods in the literature, I do not think it is proper to compare the Bayesian regret bound derived in this paper with the frequentist regret bound in other papers. Special comments should be carefully made around any of these remarks.”**\n\nThanks for your suggestion! We completely agree on this. In the revision, when comparing the regret upper bounds, we have explicitly mentioned if the upper bound is Bayesian regret or frequentist regret and added the comments that this is not an apples-to-apples comparison.\n\n2. **“Since S,A and rh are assumed to be known and deterministic, the expectation over the environment E is just the expectation over the prior of the transition probabilities?”**\n\nYes, this is true. The extension to unknown and stochastic reward functions is straightforward. \n\n3. **“Line 51: it would be more convincing to include some cases where the best UCB-type algorithms are sub-optimal.”**\n\nThanks for the suggestion. 
In Section 4 of the work “Information Directed Sampling for Sparse Linear Bandits, NeurIPS 2021”, the author has proven that any UCB-type algorithm could be sub-optimal in terms of minimax regret for sparse linear bandits. We have included this case in the revision.\n\n4. **“One potential drawback of the paper is its lack of a specific example for calculating the information ratio and the sample complexity for estimating it, which makes it hard to understand the advantage of using IDS policy in practice.”**\n\nThank you very much for your comments! In the revision, we have included a detailed algorithm box for implementing regualized-IDS for tabular MDPs in Appendix A.\n\nIn practice, it is usually expensive to directly calculate the KL-distance. Following the idea from Russo and Ben (2018) (Learning to Optimize via Information-Directed Sampling), we could lower bound the KL-distance by the variance as follows. By Pinsker's inequality, \n\n\\begin{equation}\n\\begin{split}\n \\int D_{KL}\\left(P_h^{\\mathcal E}(\\cdot|s,a)||P_{h}^{\\bar{\\mathcal E}_\\ell}(\\cdot|s,a)\\right)d \\mathbb P_\\ell(\\mathcal E)&\\geq \\int \\left\\|P_h^{\\mathcal E}(\\cdot|s,a)-P_h^{\\bar{\\mathcal E}_\\ell}(\\cdot|s,a)\\right\\|_2^2 d \\mathbb P_\\ell(\\mathcal E)\\\\\\\\\n&\\geq \\int\\sum\\_{s'} \\left(P_h^{\\mathcal E}(s'|s,a)-P_h^{\\bar{\\mathcal E}_\\ell}(s'|s,a)\\right)^2 d \\mathbb P_\\ell(\\mathcal E)\\\\\\\\\n&=\\sum\\_{s'}\\text{Var}\\left(P_h^{\\mathcal E}(s'|s,a)\\right).\n\\end{split}\n\\end{equation}\n\nThen the augmented reward function in terms of variance terms is:\n \\begin{equation}\n r'_h(s,a) = r_h(s,a)+\\lambda\\sum\\_{s'}\\text{Var}\\left(P_h^{\\mathcal E}(s'|s,a)\\right).\n \\end{equation}\nWith independent Dirichlet prior, both $\\text{Var}\\left(P_h^{\\mathcal E}(s'|s,a)\\right)$ and $P_h^{\\bar{\\mathcal E}_\\ell}(\\cdot|s,a)$ have the closed form. We can also prove that this version of regularized-IDS enjoys the same regret bound.\n\n**In Appendix A of the revision**, we conducted preliminary experiments using the RiverSwim and stochastic chain MDP environment and compared the empirical performance of posterior sampling for reinforcement learning (PSRL) and regularized-IDS. Both algorithms use the same Dirichlet priors. We use the theoretical suggested regularization parameter which is chosen as $\\lambda\\times\\sqrt{L}$, where $L$ is the number of total episodes. For a proper choice of tuning parameters $\\lambda$, regularized-IDS can outperform PSRL as illustrated below (cumulative empirical regrets over 10K episodes). This confirms the advantage of using IDS policy in practice.\n\n| Env | Regualized IDS, lambda=0.1 | Regualized IDS, lambda=0.5 | PSRL |\n| ----------- | ----------- | ----------- | ----------- |\n| RiverSwim | 136.54 | 369.23 | 229.67\n| Chain MDP | 35.97 | 66.55 | 73.71\n\n\n5. **“Line 177: conditionar => conditional”**\n\nThanks! We have corrected this in the revision.\n", " Thanks for your thoughtful review! We would like to respond to comments point by point.\n\n1. **“Definition of \\bar{\\Epsilon}_l is recursive”**\n\nThanks for pointing this out! We have modified the definition to \n\n\"We define $\\bar{\\mathcal E}_{\\ell}$ as the mean MDP where for each state-action pair (s,a), $P\\_{h}^{\\bar{\\mathcal E}\\_{\\ell}}(\\cdot|s,a)=\\mathbb E_\\ell[P_h^{\\mathcal E}(\\cdot|s,a)]$ is the mean of posterior measure.\"\n\n2. **“zeta in the proof of B.1 is undefined”**\n\n\nThanks for asking. 
We have defined $\\zeta$ in Line 239 in the main paper:\n\n“Let $\\zeta$ be a discrete random variable taking values in $\\{1, . . . , K\\}$ that indicates the region $\\mathcal E$ lies such that $\\zeta=k$ if and only if $\\mathcal E\\in\\Theta_k$.“ \n\nWe have restated the definition in the proof of B.1 in the revision.\n\n3. **“the way of defining \\pi_{TS}^l is confusing as this policy is only used in the proof and no presented algorithms use it to compute the actual policy.”**\n\n\nWe would like to mention that we also present the Bayesian regret bound for $\\pi_{TS}^\\ell$ in Section 4.4.\n\n4. **“for the clarity, it has to be mentioned in the preliminaries that the prior is assumed to be known to the learner”**\n\nThanks! We have mentioned this explicitly in the revision. \n\n5. **“Partition should depend on \\epsilon”**\n\nThanks! We have made the partition depending on $\\epsilon$ in the revision.\n\n6. **“In equation (3.3), $\\pi$ is missing from $I_{\\ell}$”**\n\nThanks! We have added $\\pi$ in the revision.\n", " This paper provides the first information-directed sampling (IDS) algorithm with theoretical guarantees for the learning in episodic MDP in the prior free setting. The assumptions are the following: the reward function is deterministic and known to the learner and the transition probabilities in the MDP are unknown and are sampled from a known prior distribution before the first episode begins. Authors considers two types of setups: all presented algorithms works in a tabular setting and some results also extended to work in the linear MDPs. The performance of the learner is measured by a Bayesian regret. This works adapts the idea of IDS to the setting of learning in the MDP and the authors present three algorithms to tackle this problem. For the first algorithm proposed, Vanilla IDS, the idea is to introduce a notion of the “environment”, which hides in it all the randomness of the unknown parameters of MDP’s transitions and define the information ratio for a policy \\pi as the ratio between the square of expected difference of the value function of optimal policy and the value function of policy \\pi, divided by the information gain of the “environment” variable and a the history of episode ℓ up to layer h produced by a policy \\pi, all conditioned on the history. To find a \\pi, that achieves the minimum information ratio, the learned has to optimize over the full policy space, which is a computationally costly. The analysis is simple and borrows the tricks from the literature, as the decomposition of regret based on the marginal posterior distribution of “environment” (line 145) and a trick with the ratio of occupancy measures (Lemma D.3), but all together it gives the first regret bound of this kind. \nNext, authors propose Regularized-IDS algorithm, where instead of computing the ratio, authors propose to compute the sum of the arguments of Vanilla IDS. The result of this chapter is that the \n Regularized-IDS can be efficiently computed using the samples from the posterior which gives the augmented MDP and has the same regret bound as Vanilla IDS. \nFinally, the author improve the regret bound of Regularized-IDS and Vanilla IDS, which they show can be achievable by Surrogate-IDS algorithm. The idea of this algorithm is to construct a surrogate environment, which would be an \\epsilon approximation of the true “environment” variable and then compute the information ratio which would be computed over this approximated environment. 
This algorithm is not computationally efficient, but it improves the dependence of the regret bound on S. Also, the discretisation approach allows to extend the results obtained for episodic MDP to linear MDP, as the number of the set in the partition of the environment space does grow as the covering number of the bounded set in R^d. \n\n\nI find it especially interesting how similar techniques works in the analysis of this paper and [Foster et.al 2021], since it give another evidence that the decision-estimated coefficient is related to the information ratio. \n In some places the definition of variables are omitted, please check it. \n- Definition of \\bar{\\Epsilon}_l is recursive\n- zeta in the proof of B.1 is undefined\n- the way of defining \\pi_{TS}^l is confusing as this policy is only used in the proof and no presented algorithms use it to compute the actual policy. \n\nMinor remarks\n\n- for the clarity, it has to be mentioned in the preliminaries that the prior is assumed to be known to the learner\n- Partition should depend on \\epsilon\n- In equation (3.3), \\pi is missing from I_{\\ell}\n The main limitation of the proposed algorithms is that they are not computationally efficient. ", " This paper studies information-directed sampling (IDS) for Markov decision processes. In particular, the authors prove the Bayesian regret bound of IDS for finite horizon tabular MDPs and linear MDPs. \n The writing of the paper is clear and the proofs seem to be sound. While I appreciate the comparison of the regret bound with other methods in the literature, I do not think it is proper to compare the Bayesian regret bound derived in this paper with the frequentist regret bound in other papers. Special comments should be carefully made around any of these remarks.\n Since $\\mathcal{S}, \\mathcal{A}$ and $r_h$ are assumed to be known and deterministic, the expectation over the environment $\\mathcal{E}$ is just the expectation over the prior of the transition probabilities?\n Line 51: it would be more convincing to include some cases where the best UCB-type algorithms are sub-optimal.\n\nOne potential drawback of the paper is its lack of a specific example for calculating the information ratio and the sample complexity for estimating it, which makes it hard to understand the advantage of using IDS policy in practice. \n\nLine 177: conditionar => conditional\n", " This paper presents general guarantees for information directed sampling in MDPs. As it stood, prior work had only understood Thompson-Sampling inspired approaches in frequentist settings, or provided bounds for specific priors, but this is the first work to analyze proper IDS for MDPs with no restrictions on the prior. Strengths: the bounds in this paper apply to general priors, some of the information bounds based on the method of mixtures may be of independent interest, a regularized variant of IDS can be implemented efficiently given access to a natural sampling oracle. In addition, the paper is generally well written and well explained, despite a couple of minor grammatical issues. Authors do a great job explaining what the essential ingredients are of their proofs.\n\nThe refinements due to rate distortion theory were also a nice addition.\n\nWeaknesses: I should preface this by saying that I am not an expert on Bayesian regret bounds; hence it is hard for me to gauge the technical contribution of this paper. 
However, it does seem that the techniques and arguments are rather standard, and I think it would be useful for the authors do explain not just the *results* derived in prior works, but to give a sense of how common (or unique) their techniques are in comparison to the rest of the Bayesian regret community. \n\nIn addition, it seems that the bounds here do not match what is attainable in the (harder) frequentist setting. This makes me wonder - either is (a) the analysis loose, or (b) can one derive lower bounds to show that IDS (without modification) necessarily suffers this worse sampling complexity? Even some numerical experiments demonstrating scaling with S would be illustrative here. \n\nAnother weakness is that the sharper guarantees required computing an explicit cover, which is computationally prohibitive. I would have been more excited if the refined regret were attainable with computationally efficient algorithms. Do the authors conjecture that the suboptimality of their regret is a limitation of the analysis, or the algorithm? To they have any analysis or experimental evidence to shed light on this? Moreover, have the authors thought about what a more computationally efficient algorithm which uses the MDP cover would look like? As noted, regret bounds are suboptimal, refined bounds are not computationally efficient. ", " In this paper, the authors studied the provable efficient Information-Directed Sampling (IDS) methods in MDP setting. They first proposed vanilla-IDS and then derived a prior-free Bayesian regret bound for it. After that, for the sake of computational efficiency, they proposed another variant called regularized-IDS. Besides, they improved the regret bound by learning a surrogate environment. Beyond the tabular setting, they also extended their results to linear MDP. ### Strengths\n\nThis paper has an important contribution to understanding IDS methods in the MDP setting. The algorithm and analysis look novel and interesting. The paper writing looks good to me.\n\n\n### Weakness\n\nI only have a small issue about the rigorousness in the discussion related to $\\Gamma^*$ in Sec. 3.1 and related proof.\n\nIt seems to me that $\\Gamma_l(\\pi^l_{IDS})$ is not a constant across $l=1,2,...,L$, while $\\Gamma^*$ is defined to be the worst-case information ratio and upper bounds $\\Gamma_l(\\pi^l_{IDS})$ for all $l\\in[L]$. As a result, there might exists some $\\bar{l},\\tilde{l} \\in [L]$, such that $\\Gamma^*$ is attained at $\\bar{l}$, but at $\\tilde{l}$, we have $\\Gamma_{\\tilde{l}}(\\pi^\\tilde{l}_{IDS}) < \\Gamma^*$.\n\nTherefore, although we always have $\\Gamma_l(\\pi^l_{IDS}) \\leq \\Gamma_l(\\pi^l_{TS})$, it is possible that $\\Gamma_l(\\pi^l_{IDS}) \\leq \\Gamma_l(\\pi^l_{TS}) < \\Gamma^*$ when $l=\\tilde{l}$. As a result, I think the argument $\\Gamma^* \\leq \\Gamma_l(\\pi^l_{TS})$ (Line 482 in the proof of Lem. 3.2) is not correct (but I guess one can recover the same regret upper bound without introducing $\\Gamma^*$ and therefore there will be no such issue).\n\nIf the authors can fix the issue I mentioned above, I would like to increase my score correspondingly. Please check the **Weakness** section above. N.A." ]
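The rebuttal for this paper notes that, under an independent Dirichlet prior, both the posterior mean transition and the per-next-state variances $\text{Var}\left(P_h^{\mathcal E}(s'|s,a)\right)$ used in the augmented reward of regularized-IDS have closed forms. Below is a minimal sketch of those standard Dirichlet closed forms and of the variance-based reward bonus; the array shapes, function names, choice of $\lambda$, and toy counts are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def dirichlet_posterior_stats(counts):
    """Posterior mean and per-next-state variance of P(. | s, a)
    under independent Dirichlet posteriors with parameters `counts`.

    counts: array of shape (S, A, S) holding Dirichlet parameters
            (prior pseudo-counts plus observed transition counts).
    """
    alpha0 = counts.sum(axis=-1, keepdims=True)              # (S, A, 1)
    mean = counts / alpha0                                    # E[P(s'|s,a)]
    # Standard Dirichlet marginal variance: a_i (a_0 - a_i) / (a_0^2 (a_0 + 1)).
    var = counts * (alpha0 - counts) / (alpha0**2 * (alpha0 + 1.0))
    return mean, var

def augmented_reward(reward, counts, lam):
    """r'(s,a) = r(s,a) + lam * sum_{s'} Var(P(s'|s,a)),
    the variance-based bonus described in the rebuttal."""
    _, var = dirichlet_posterior_stats(counts)
    return reward + lam * var.sum(axis=-1)

# Tiny usage example with hypothetical sizes (2 states, 2 actions).
counts = np.ones((2, 2, 2)) + np.array([[[3., 1.], [0., 2.]],
                                        [[1., 1.], [5., 0.]]])
reward = np.zeros((2, 2))
print(augmented_reward(reward, counts, lam=0.1))
```

With this bonus, planning in the mean MDP against the augmented reward gives a policy that trades off exploitation and information gain without computing KL divergences explicitly, which is the practical motivation given in the rebuttal.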
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "XSxjFseQzna", "XRsSRMb0mmo", "C71a-jPvH6I1", "P7t852WOBEp", "jg0D8CTuM6F", "qk3eLccl7wV", "CQo_Bb8d4rH", "nips_2022_1pHC-yZfaTK", "Tpn0-rrxq6w", "pOBAWa5g96L", "jg0D8CTuM6F", "vZf8g-tCFoW", "nips_2022_1pHC-yZfaTK", "nips_2022_1pHC-yZfaTK", "nips_2022_1pHC-yZfaTK", "nips_2022_1pHC-yZfaTK" ]
nips_2022_VOPiHQUevh5
TUSK: Task-Agnostic Unsupervised Keypoints
Existing unsupervised methods for keypoint learning rely heavily on the assumption that a specific keypoint type (e.g. elbow, digit, abstract geometric shape) appears only once in an image. This greatly limits their applicability, as each instance must be isolated before applying the method—an issue that is never discussed or evaluated. We thus propose a novel method to learn Task-agnostic, UnSupervised Keypoints (TUSK) which can deal with multiple instances. To achieve this, instead of the commonly-used strategy of detecting multiple heatmaps, each dedicated to a specific keypoint type, we use a single heatmap for detection, and enable unsupervised learning of keypoint types through clustering. Specifically, we encode semantics into the keypoints by teaching them to reconstruct images from a sparse set of keypoints and their descriptors, where the descriptors are forced to form distinct clusters in feature space around learned prototypes. This makes our approach amenable to a wider range of tasks than any previous unsupervised keypoint method: we show experiments on multiple-instance detection and classification, object discovery, and landmark detection—all unsupervised—with performance on par with the state of the art, while also being able to deal with multiple instances.
Accept
The meta reviewer has carefully read the paper, reviews, rebuttals, and discussions. The authors did a good job in the rebuttal: the additional results and clarifications addressed the reviewers' concerns, and the manuscript crosses the acceptance bar. The authors are still encouraged to revise the paper in light of the reviewers' comments.
train
[ "oB5CtDBKonF", "HX84vGiyX0NS", "rg3jK_yMf00", "lqpGx88ocfs", "ijt_Of1f3O3j", "K8MlQNSrqWF", "E8IdISoyIto", "mlc9fsvKUDe" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As the author-reviewer discussion period will end on Tuesday, we would like to know if our response answered the reviewers' concerns regarding the paper. Please let us know if you have any further questions, and we will do our best to reply by tomorrow.", " We thank all reviewers for their input. We have addressed their questions below. Some replies required tables with results, which are listed inline, and figures, which have been uploaded to an external source while preserving anonymity (openreview does not support image uploads). \n\nPlease let us know if you have further questions during the reviewer-author discussion period.", " **Key concepts are well known, such as contrastive learning, clustering of feature representation, equivarience.**\n\nWe would like to emphasize that while the concepts that we utilize may exist in the literature, localized prototypes, especially ones that can support multiple instances, do not. It is non-trivial to build such a framework, and it is one of our main contributions.\n\nWe also note that our clustering loss based on the Sliced Wasserstein distance is novel and effective. As requested, we compared it against Vector Quantization (see first answer to reviewer tYeA) and k-means clustering with deep nets (see below), and both methods perform worse than ours.\n\n**Test on more complex datasets (PASCAL3D+, VehiclePart, KITTI)**\n\nWe would like to note that the SOTA in unsupervised landmark (keypoint) learning is not ready for in-the-wild deployment. This includes our approach, which already takes a **significant step in this direction by removing the single-instance assumption**.\n\nOne of the limitations of our method is that we do not model the background explicitly, so that it struggles to work well on more complex datasets. As shown by the baseline methods for object discovery and landmark detection, self-supervised approaches only work with relatively simple datasets at the current stage. We noticed one concurrent work [F] published in ICLR'22 that extends the slot attention work to more complex images by using additional motion cues. However, the datasets used in [F] are still from synthetic images. As future work, we are planning to explicitly model the background and incorporate motion cues to let our method generalize to more complex images.\n\n**Missing related works**\n\nThanks for bringing up these papers. We will add and discuss them.\n\n[A, B] are early work on self-supervised landmark detection, and test on aligned-CelebA, where images are cropped to align the faces to the center of the image. As mentioned in L239, recent landmark detection methods perform really well on aligned-CelebA: we followed [H] to compare against most recent works on the more challenging unaligned-CelebA. Note that **[G], one of the baseline methods in our paper outperforms [A, B] in aligned-CelebA dataset by a large margin,** so it seems safe to assume that ours does too.\n\nRegarding using simpler clustering methods, as both [C] and [E] utilize a form of k-means, we replace our clustering loss with a deep K-means clustering module [I] in order to have an apples-to-apples comparison. We report results in the table below: in short, it performs poorly. We suspect that this is due to the well-known downfall of K-means: it cannot recover from a degenerate state (multiple means being assigned to a single cluster). 
Our method, however, enforces that the distributions between prototypes and features match, and will thus be robust to this condition.\n\n| | K-means | Sliced Wasserstein |\n|---|---|---|\n| Localization | 99.5% | 99.9% |\n| Classification | 21.1% | 92.8% |\n| Both | 21.0% | 92.8% |\n\n**Claims on modeling occlusion**\n\nUnfortunately, the CLEVR dataset does not provide any annotation that can be used to evaluate occlusion quantitatively. We will tone down this claim to a qualitative insight, and include more qualitative results, such as _[these](https://imgur.com/a/5tV5bFr)_, in the supplementary material, to showcase more examples.\n\n* [A] Unsupervised learning of object landmarks by factorized spatial embeddings\n* [B] Unsupervised learning of object frames by dense equivariant image labelling\n* [C] Visual concepts and compositional voting\n* [D] Detecting semantic parts on partially occluded objects\n* [E] Learning deep parsimonious representations\n* [F] Conditional Object-Centric Learning from Video\n* [G] Unsupervised Learning of Object Landmarks through Conditional Image Generation\n* [H] Unsupervised Part Segmentation Through Disen- tangling Appearance and Shape\n* [I] Deep k-Means: Jointly clustering with k-Means and learning representations", " **Quantitative evaluation on complex multiple instances**\n\nWe show that our method achieves comparable results to the SOTA on self-supervised landmark detection on the datasets typically used by the literature, and that it naturally generalizes to **multiple instances** (such as faces). We also show that by removing the single-object constraint our method can be applied to object discovery, generalizing to **multiple tasks.** While datasets like COCO/Wider Face contain multiple instances, they are also much more difficult and out of reach for the SOTA in fully unsupervised methods.\n\n**Will the reconstruction loss work on images with more complex backgrounds?**\n\nAs we briefly discuss in the limitations section, our approach cannot deal with very complex backgrounds yet. This is also true for the SOTA in unsupervised methods, which are not ready for in-the-wild data. Note, however, that our approach performs well on images with different backgrounds, like those in unaligned-CelebA, and that unlike the SOTA in unsupervised learning, **it can deal with multiple instances,** which is itself a significant step towards getting such methods \"out of the lab\". We intend to explore this in future work.\n\n**How are prototypes learned?**\n\nLet us detail the process from L220-224. Within each training iteration, we first optimize encoder and decoder with fixed prototypes. We then optimize the prototypes with a fixed encoder/decoder. In Section A.2 (L534) we further describe how to sample prototypes to apply the sliced Wasserstein loss. Let us summarize this process:\n\n${D}$: descriptors\n\n$M$: number of descriptors\n\n${P}$: prototypes\n\n$N$: number of prototypes\n\n**function** TrainPrototype (${D}$, ${P}$)\n1. Calculate mixing ratio {$\\pi_m$}\n * {${D}_m$} $\\leftarrow$ divide ${D}$ into $M$ subsets where each subset is associated to a member in ${P}$ in terms of smallest $\\ell_2$ norm\n * {$r_m$} $\\leftarrow$ calculate the ratio of ${D}_m$ in ${D}$\n * {$\\sigma_m$} $\\leftarrow$ calculate variance of {${D}_m$}\n * $\\sigma$ $\\leftarrow$ calculate variance of {$\\sigma_m$}\n * {$\\alpha_m$} $\\leftarrow$ {$r_m$} +$\\sigma$\n * {$\\pi_m$} $\\leftarrow$ {$\\alpha_m$} /$\\sum\\alpha_m$\n2. 
Sample from GMM\n * ${\\tilde{P}}$ $\\leftarrow$ initiate empty list\n * **for** $p_m$ **in** ${P}$\n * append $\\pi_m \\times N$ samples from a Gaussian centered at $p_m$ with a predefined variance to ${\\tilde{P}}$\n3. Calculate sliced Wasserstein distance\n * $d$ $\\leftarrow$ SW_distance(${\\tilde{P}}$,${D}$)\n 4. Train prototypes\n * Optimize ${P}$ by minimizing $d$\n\nWe will include this in the supplementary.\n\n**How does the number of prototypes affect the results?**\n\nOur method does not require the number of prototypes to be exact, but performs best when it is known in advance. In the table below we ablate the number of prototypes on MNIST-Hard. If it is smaller than the number of classes, each prototype learns to represent multiple classes, as shown _[here](https://imgur.com/a/n1bGe4V)_. With 5 prototypes representing 10 classes, classification accuracy will be at most 50%: we achieve 47.4%. We can also evaluate the reverse mapping from class label to prototypes and check the consistency of assignments, which gives us 98.3% (this cannot be taken as accuracy!). If we have more prototypes than classes, prototypes learn to represent different modes of the class, as shown _[here](https://imgur.com/a/5UZbdTu)_. In general, a reasonable result can be achieved if the number of prototypes $P$ is larger than the number of classes. We will add these results to the paper.\n\n| | $P$=5 | $P$=5 (reverse) | $P$=10 | $P$=20 |\n|---|---|---|---|---|\n| Localization | 90.8% | 90.8% | 99.9% | 99.9% |\n| Classification | 47.4% | 98.3% | 92.8% | 87.7% |\n| Both | 43.0% | 89.2% | 92.8% | 87.7% |\n\n**What happens if ${L_{recon}}$ or ${L_{sw}}$ are removed?**\n\nThe reconstruction loss is the main loss term used to train encoder and decoder. Our method will not work without it. All the other loss terms are used to regulate the latent space and encourage clustering.\n\nWithout the sliced Wasserstein loss, the network can easily learn a trivial solution where one prototype represents all the features and the rest are unused, as shown _[here](https://imgur.com/a/tXUHgg1)_. We also extend the ablation study below. We will add these results to the paper.\n\n| | With SW loss | Without SW loss |\n|---|---|---|\n| Localization | 99.9% | 99.8% |\n| Classification | 92.8% | 10.2% |\n| Both | 92.8% | 10.2% |\n\n**Why are localization results not affected by different settings of training losses?**\n\nLocalization is rather easy on MNIST-hard. Nonetheless we use this dataset to ablate as it is most “controllable”. The additional loss terms are designed to regulate the learned latent space to have better classification performance, as shown in Table 4.\n\n**Typos**\n\nThank you for pointing them out. We will correct them.", " **What is the relationship to autoencoders for quantization and the Gumbel softmax?**\n\nWe will include the discussion on VQVAE and Gumbel softmax in the related works section, and add vector quantization as a baseline for comparison.\n\nIn more detail, VQVAE and Gumbel softmax approaches are relevant, but are typically used for a different purpose: having a quantized latent space that can, for example, be easily translated by a transformer in order to generate images. 
We do not aim for a quantized latent space, but rather prototypes that **directly relate to semantics**.\n\nThis difference makes vector quantization a suboptimal choice when it comes to clustering, as shown in the table below, where we swap our clustering method with vector quantization (everything else remains the same) on MNIST HARD with 10 prototypes:\n\n| | Sliced Wasserstein | Vector Quantization |\n|---|---|---|\n| Localization | 99.9% | 65.3% |\n| Classification | 92.8% | 23.7% |\n| Both | 92.8% | 15.3% |\n\nNote how localization somewhat works, but classification completely fails, clearly indicating that vector quantization does not solve clustering. Qualitatively, we observe that some numbers get quantized into the same prototype, and others get split into two. This leads to detections also being somewhat “off” when estimating the center of the digits. We will add these results to Table 1 in the paper.\n\n**Is the metric used in the landmark-to-keypoint evaluation valid?**\n\nWe believe it is. As the reviewer states, on datasets with roughly centered objects, placing the reference frame in the middle of the image may result in better linear mappings than placing it in a corner. Similarly, if the linear regressor has an intercept, it can learn to exploit these biases, even ignoring the input keypoints.\n\nWe train a linear regressor **without an intercept,** and place the origin on the **top left corner of the image,** the same as all baseline methods. The estimated landmarks are thus entirely dependent on the detected keypoints.\n\nIt is true that **the keypoints themselves** could exploit dataset biases such as objects being centered, but we evaluate on unaligned datasets, where the objects (e.g. faces in CelebA) are slightly misaligned: see L239-245.\n\n**How we deal with multiple instances when converting to keypoints**\n\nOur linear regressor cannot be identical to that used by previous methods because our approach can assign two or more keypoints to the same prototype. Our regressor needs more inputs: $2 \\times K \\times P$ ($K$: number of keypoints; $P$: number of prototypes), of which only $2 \\times K$ are non-zero, the same as for the baselines (see L289-300). As the reviewer noticed, we forgot to mention that if multiple keypoints are assigned to the same prototype, we sort them **by their x coordinate from left to right** to fill the input tensor: we will add this detail to the paper.\n\nAs stated above, the frame of reference for the keypoints is the **top left corner of the image** and the regressor **does not have a bias,** as previous papers do.\n\n**Can we conclude anything from the comparison in table 3?**\n\nWe use the same evaluation metric as all previous self-supervised keypoint learning papers, mapping keypoints to landmarks using a simple linear regressor without an intercept. Keypoints are in the same reference frame, with the origin at the top left corner of the image. Our regressor is essentially equivalent to that used by previous works, with a small modification to account for multiple instances. The comparison is thus meaningful. Our approach delivers comparable results while being applicable to multiple instances, unlike any of the baselines.\n\nWe believe we have answered the reviewer's questions regarding this evaluation, and kindly request clarification if that is not the case. 
We will reply within the reviewer-author discussion window.\n\n**What is the intuition behind the Sliced Wasserstein loss?**\n\nThe intuition behind the loss is that the prototypes, if learned well, should resemble the data. In other words, their distributions should match. For example, should we have a perfect method for MNIST, the latent space should be 10 very narrow islands (almost a delta function), and each prototype should resemble each island. This is what we try to enforce on our prototypes via the sliced wasserstein loss, which minimizes the distance between prototype and feature distributions.\n\nWe show results on MNIST-hard. Without the sliced Wasserstein loss the prototypes collapse, as seen _[here](https://imgur.com/a/tXUHgg1)_: all features are assigned to a single prototype and the rest are not used.\n\n| | With SW loss | Without SW loss |\n|---|---|---|\n| Localization | 99.9% | 99.8% |\n| Classification | 92.8% | 10.2% |\n| Both | 92.8% | 10.2% |", " The following work proposes a formulation of the unsupervised latent-landmark learning autoencoder where the landmark bottleneck makes no assumptions about the number of instances of a keypoint to appear in each image -- an important distinction as prior approaches assume each landmark to appear exactly once per image. As with prior methods, two encoders are used, one for pose-invariant feature information, and the other for feature-invariant pose information. Output from the two encoders is combined and jointly decoded to reconstruct the original input. Pose invariance is achieved by feeding one encoder a thin-plate-spline warped image. Landmark activation locations are represented with predicted heatmaps. Unlike previous methods, these heatmaps are not assumed to have a single peak. Rather, the top-K activations are identified via NMS and retained for reconstruction. An online K-means loss is used to cluster feature representations at landmark locations where the resultant centroids are the resultant landmark descriptor. Strengths\nI think this work attempts to tackle an important issue, which is the applicability of unsupervised landmark methods. Due to the 1-instance-per-image assumption, these methods tend to be impractical when there are unknown number of instances or out-of-plane rotations.\n\nWeaknesses\n- Unless I've misunderstood a key component, the proposed method appears to be closely related to the autoencoders used in quantization literature, most notably VQVAE (van den Oord, 2017) and gumbel-softmax approaches. There's already a lot of extant literature based on top of that, learning discrete visual tokenizations of images that are critically neglected from this study. At the very least, these should serve as baselines for the clustering approach used in this work.\n- There are some validity issues regarding the landmark-to-keypoint evaluation. \n - The first issue is directed towards the validity of the metric itself, and not specific to this work. Prior methods using this evaluation perform a linear regression without an intercept to map landmarks to keypoints. The lack of the intercept makes it such that the regression result is highly sensitive to the positioning of the origin in the coordinate space. 
On centered objects, placing the origin in the middle of the image will often result in better linear mappings than should the origin be placed in any of the 4 corners of the image.\n - The linear mapping formulation isn't really applicable to the landmark formulation in this method, as the authors noted in the text. It's not clear whether or not the linear layer used by this work includes the bias term. Furthermore, I'm not sure whether the tensor representation implies an ordering to the multiple instances of the same landmark, or this has been handled by the authors in their specific formulation. \n - In general, I don't think we can conclude anything from the comparison in table 3, though I understand the necessity of attempting the comparison regardless. Assuming my understanding is correct, this should probably be noted in the paper. - It would be great if the authors could clarify for me what is meant by the intuition behind the Wasserstein loss (that the distributions of features and prototypes should match).\n- Please address my concerns regarding the potential relationship to vector quantization literature. limitations appropriately addressed", " This paper proposed a new method for learning task-agnostic keypoints in an unsupervised way. The proposed method encodes semantics into the keypoints by reconstructing the original image from sparse keypoints descriptors. A group of prototypes is learned during training and the descriptors are constrained to be around the prototypes. The proposed method can predict keypoints for multiple instances and achieves comparable results with the state of the art. Strengths:\n1. The task of unsupervised keypoint localization for multiple instances is interesting and may facilitate future research.\n2. The proposed method is technically sound and achieves comparable results with the state of the art. \n3. The visualization of the prototype on the MNIST-Hard dataset is interesting. It is better to show the visualization of face/body keypoints too. \n\nWeakness:\n1. Multiple instances supporting is an important part of the proposed method, however, only a single instance quantitative evaluation is shown for some complex tasks (face/ body keypoint localization). The effectiveness of the proposed method needs to be further proved in such kinds of tasks. \n2. The reconstruction loss L_recon seems to be mandatory in the proposed method and it has been performed for all the experiments. But this loss may affect the robustness of the method on datasets with complex backgrounds which is more common in the multi-instance dataset. The authors may need to add more experiments on such kinds of datasets, e.g. the COCO dataset, The WIDER FACE dataset. \n3. The details of how to learn the prototype is not clear in the submission. It is better to show how to update the prototype based on the sliced Wasserstein loss. \n 4. The different settings of prototype number, i.e. M, need to be carefully analyzed. 1. In Table 4, what will happen if the L_recon or L_sw is removed from the entire loss?\n2. In Table4, the localization results seem to change slightly with different settings of losses. Is it because the localization is too easy on the MNIST-Hard dataset. \n3. Some typos need to be refined, e.g. \nL. 67, 'priori'->'prior'\nIn the supplementary L. 532, P_i represents both point position and prototype. \nIn the supplementary L. 537, should 'descriptors' be 'prototype'? The authors have addressed the limitations in the submission. 
", " The paper describes a method for learning keypoint detectors in an unsupervised manner. The key extension over related work is that the proposed method does not build on the assumption that only one target keypoint is present in an image. Technically this is achieved by having one keypoint heatmap/response map in the architecture that is shared among all keypoints, whereas related work estimates one heatmap per keypoint. Other technical strategies that are used to encourage equivariance and to clustering of the feature descriptors have also been used in prior work, but are combined in this work to enable fully unsupervised learning of keypoints detectors. The experimental evaluation shows that the proposed method performs comparably to related work on rather artificial datasets and simple datasets of faces and humans, while not building on the single object assumption. Strengths:\n+ The paper addresses an important problem, in that it aims to resolve the major issue that so far methods for unsupervised keypoint discovery assume that only one object is present in an image. While this is a valid assumption when learning from video streams (where objects can be segmented fairly well based on their movement), it is a very limiting factor when learning from unordered sets of images. I have myself been concerned about this issue for some time, and I appreciate that this paper makes progress towards resolving this limitation.\n+ The paper is very well written and describes the key concepts very intuitively. I have enjoyed reading the paper and could understand the key concepts already after the first reading pass.\n\nWeaknesses:\n- Key concepts presented and combined in this paper are very well known. While using a single heatmap to represent the activation of all keypoints to resolve the assumption about keypoint numbers is relevant, the contrastive learning of feature representations, clustering of feature representations, and equivariance losses are well known and widely applied.\n- The experimental evaluation is limited as it is conducted on very simple datasets. I understand that this is meant to be a proof-of-concept evaluation, but given the limited variability of the object shapes, appearance and background it remains unclear if much more simple method simple clustering of features [C,E] would suffice. I highly recommend testing the proposed method on datasets of real-world images with more complex variations in shape and appearance such as PASCAL3D+ or VehiclePart [D] or even KITTI.\n- Several related works that for unsupervised learning of landmark and part representations are not mentioned [A-C] and should be compared to.\n- The claim that the proposed method can model occlusion (e.g. l330) is not quantified and only shown qualitatively on images with very minor occlusion. I would recommend to tone down this claim as it seems unjustified.\n\n[A] Unsupervised learning of object landmarks by factorized spatial embeddings.\n\n[B] Unsupervised learning of object frames by dense equivariant image labelling.\n\n[C] Visual concepts and compositional voting\n\n[D] Detecting semantic parts on partially occluded objects\n\n[E] Learning deep parsimonious representations\n\n=====================POST REBUTTAL=========================\nAfter reading the other reviews and the rebuttal I vote for accepting the paper. My concerns have been addressed by the additional results and clarifications provided in the rebuttal. 
I also think that the concerns of the other reviewers were addressed sufficiently. Therefore I raise my initial score to 7. I would like the authors to comment on the weaknesses that I described. I will carefully reconsider my rating depending on the authors' response, especially concerning the lack of novelty and the simplicity of the datasets in the experiments, as well as the discussion with the other reviewers. The limitation section addresses the core limitations adequately and concisely." ]
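The `SW_distance` step in the TrainPrototype procedure sketched in the rebuttal above can be approximated by projecting both point sets onto random directions and comparing their sorted projections. The snippet below is a hedged illustration of that generic sliced Wasserstein-2 computation; the function name, the number of projections, and the toy data are assumptions rather than the authors' code.

```python
import numpy as np

def sw_distance(x, y, n_proj=64, rng=None):
    """Approximate sliced Wasserstein-2 distance between two point sets.
    x: (N, D) descriptors, y: (N, D) samples drawn around the prototypes.
    Both sets are assumed to contain the same number of points, so 1-D
    optimal transport along each random direction reduces to sorting."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    # Random unit directions on the sphere.
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sets, sort along each direction, compare order statistics.
    px = np.sort(x @ theta.T, axis=0)   # (N, n_proj)
    py = np.sort(y @ theta.T, axis=0)
    return np.mean((px - py) ** 2)

# Hypothetical usage: 256 descriptors of dim 32 vs. GMM samples around prototypes.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(256, 32))
prototype_samples = rng.normal(size=(256, 32))
print(sw_distance(descriptors, prototype_samples))
```

Because sorting is piecewise differentiable, the same computation can be used directly as a clustering loss that pulls the descriptor distribution toward the prototype-centered mixture, which is the role it plays in the training loop described above.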
[ -1, -1, -1, -1, -1, 5, 4, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "HX84vGiyX0NS", "nips_2022_VOPiHQUevh5", "nips_2022_VOPiHQUevh5", "nips_2022_VOPiHQUevh5", "nips_2022_VOPiHQUevh5", "nips_2022_VOPiHQUevh5", "nips_2022_VOPiHQUevh5", "nips_2022_VOPiHQUevh5" ]
nips_2022_mmzkqUKNVm
Semantic Diffusion Network for Semantic Segmentation
Precise and accurate predictions over boundary areas are essential for semantic segmentation. However, the commonly used convolutional operators tend to smooth and blur local detail cues, making it difficult for deep models to generate accurate boundary predictions. In this paper, we introduce an operator-level approach to enhance semantic boundary awareness, so as to improve the prediction of the deep semantic segmentation model. Specifically, we formulate the boundary feature enhancement process as an anisotropic diffusion process. We propose a novel learnable approach called semantic diffusion network (SDN) for approximating the diffusion process, which contains a parameterized semantic difference convolution operator followed by a feature fusion module and constructs a differentiable mapping from original backbone features to advanced boundary-aware features. The proposed SDN is an efficient and flexible module that can be plugged into existing encoder-decoder segmentation models. Extensive experiments show that our approach can achieve consistent improvements over several typical state-of-the-art segmentation baseline models on challenging public benchmarks.
Accept
This submission got a mixed rating: 1 borderline reject, 2 weak accept, and 1 accept. Most of the concerns lie in the explanation of details and the experimental comparison with certain baselines/variants. The authors addressed them well by providing additional experimental results in their response. The remaining concern from the reviewer who gave borderline reject lies in the theoretical justification of the proposed operation. The authors managed to provide a theoretical interpretation from the viewpoint of the diffusion process, which partially addresses the reviewer's question. Overall, all the reviewers agree that this submission introduces a simple and effective method for the segmentation field. The effectiveness of the proposed method has been validated via extensive experiments. The performance improvement is significant. The manuscript is written clearly. The contribution is sufficient. Based on the above considerations, the AC recommends acceptance of this submission.
test
[ "GsXGEcqXnI", "P9P3sVR9onQ", "HpTr6xyASsE", "GL3rN-oV06", "LIiWVtFrerV", "--34I9uRzb_", "tDNmn0t9Sn3", "aymS7T9leO7", "aVi__MHx_7d", "_bZcgvHBOWr", "fBol6WxEeRF", "0yAo3U9V-X", "f8Lm8YWO-NA", "lHqnuTWkErG", "M1dTtAZBlh", "1oXkI9GaXa", "stKc-4GCeQWN", "aup8-JJn_Z0", "Y75Q0hcLXEG", "PbmvXh5ylsT2", "T1hWTRYNTIE", "jbDNvxbLH7v", "O6j1khky9IU", "vS9ZHl1gGB", "47bxz-N6N9O", "XO1wjhUFHEj", "RxJH8D_mN7z" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer:\n\nWe feel very honored and glad to hear from you! Your comments have helped our paper become stronger. I would like to extend my sincere thanks to you!\n\n**About the code:** The codebase involved in this paper, training config, and model checkpoint will be public!\n\n**About rating score:** We notice that the rating score is still 4 (Borderline reject). Since the rebuttal deadline is approaching, we are wondering that is there any other concerns. If so, please feel free to let us know and we will try our best to address your concerns! Honestly, we really wish to get a higher rating score from you. \n\nBest regards\n\nAuthors", " Dear Reviewer GLui:\n\nSincerely thanks for your final appreciation of our work! \nIt's really good news that your concerns have been addressed during this discussion phase. \nWe will improve the final version of this paper according to the valuable suggestions from reviewers. \nActually, our method is easy to follow. To ensure the reproducibility, the complete code, detailed config files and trained models will be public. \nSincerely thanks and expect your higher rating!\n\nBest regards! 
\\\nThe authors of “Semantic Difference Convolution for Semantic Segmentation”.", " * I would like to express my appreciation to the authors to take the time to my questions. Most of my concerns were resolved in the rebuttal phase; mainly related to the limitations and novelty I have addressed. The author's rebuttal to my questions and other reviewers' questions have resolved my misunderstanding and I could understand this paper in detail. In addition, the newly exhibited experimental result could express the empirical analysis of the proposed operator. Therefore, I \bwill increase my rating through the discussion phase.\n\n* Furthermore, for reproducibility and the improvement of the deep learning society, I would strongly address that the code would be published in public. In hopeful expectation, I will adjust my rating in good faith.", " Dear reviewer: \n\nSincerely thanks for your appreciation again! We are honored by this! \\\nIt's really good news that our reply has addressed your concerns. Thanks for your time and valuable suggestions! They are critical to for us further improve this paper.\n\nBest regards!
 \\\nThe authors of “Semantic Difference Convolution for Semantic Segmentation”.\n", " Dear Reviewer: \n\nThanks a lot for your time and efforts in reviewing our paper! \\\nIt's really good news that our reply has addressed your concerns. If you have any other questions, please feel free to contact us, we will try our best to address them in time! \\\nSincerely thanks for your appreciation again! We are honored by this!\n\nBest regards! \\\nThe authors of “Semantic Difference Convolution for Semantic Segmentation”.\n", " Thanks for the detailed rebuttal. It has addressed all my concerns. Therefore I keep my rating as \"accept\".\n\n", " Dear Reviewer:\n\n&nbsp;\n\nSincerely thanks for all your efforts and suggestions! They are important for us to further improve this paper!\n\nIt's a great honor to get your recognition! Thanks for raising the score. We will try our best to improve the final version of the paper.\n\n&nbsp;\n\nBest regards. \\\nThe authors of “Semantic Difference Convolution for Semantic Segmentation”.\n\n\n", " Thanks for the detailed rebuttal. I addresses my concern. So I keep my rating as weak accept.", " Thanks a lot for the efforts in addressing all the concerns raised by reviewers!\n\nThe authors show more experimental results on SOTA models, such as DeepLab-v3+ and SegFormer.\n\nI do not have more concerns on the novelty of this work, even though the contribution is incremental based on DC. However, it introduces semantics into the operation, which leads to improved segmentation performance and supposed to accept by NeurIPS. \n\nThe authors should improve the writing for publication. By reading all the comments and feedback, I insist on my suggestion and believe this work should be accepted.", " Dear reviewers, \n\nThanks a lot for your time and efforts in reviewing our paper. We have tried our best to address all mentioned concerns. We would appreciate it if you could take a look at our response. As the discussion deadline is approaching, your feedback is very important to us, and if there are any new questions, we can therefore reply in time. \n\nsincerely yours\nAuthors", " Dear reviewer Glui: \n\nHello.\n\nThank you for your review of this article and your contribution to the academic community! Thanks for your constructive suggestions! In our response comments, we made a detailed experimental and theoretical explanation for your questions and concerns, including:\n\nA. In [Response to ''Questions'' comments](https://openreview.net/forum?id=mmzkqUKNVm&noteId=PbmvXh5ylsT2), we made the following responses: (1). further mathematical illustration, reference, and further discussion about the semantic feature in SDC; (2). Further analysis and experiments of the impact when the SD value is minimal; (3). The motivation of SDC and further applications. \n\nB. In [Response to ''Limitations'' comments (part-1)](https://openreview.net/forum?id=mmzkqUKNVm&noteId=Y75Q0hcLXEG), we provided a further mathematical (theoretical) analysis of the performance of the proposed method. \n\nC. In [Response to ''Limitations'' comments (part-2)](https://openreview.net/forum?id=mmzkqUKNVm&noteId=aup8-JJn_Z0), we further clarified the novelty, provided the performance under the new metric, and further Grad-CAM based visualization. \n\nD. 
In [Response to ''Limitations'' comments (part-3)](https://openreview.net/forum?id=mmzkqUKNVm&noteId=stKc-4GCeQWN), we provided further discussion with the parameters in the SDC operator (input and output size, padding, dilation, kernel-size, strides…)\n\n&nbsp;\n\nWe would like to know whether our reply has addressed your concerns. If you have any questions, please feel free to let us know, and we will try our best to address your concerns. We sincerely hope to get recognition from you and even the academic community. If our reply makes you feel satisfied, we also sincerely hope to get a higher rating score from you.\n\n&nbsp;\n\nBest regards! \\\nThe authors of “Semantic Difference Convolution for Semantic Segmentation”", " Dear reviewer XkWA:\n\nHello!\n\nWe are really honored by your appreciation! Please allow us to express our sincere thanks! \n\nIn our response comments, we have carried out targeted explanations and experiments for your suggestions and concerns, including the further explanation of table 5(a), the comparison of SDC and CDC, and the further analysis of the semantic difference term. \n\nWe would like to hear from you further. If you have any questions, please feel free to let us know. We will try our best to solve your concerns.\n\nBest regards. \nThe authors of “Semantic Difference Convolution for Semantic Segmentation”.\n\n", " Dear reviewer GWNK:\n\nHello! \n\nFirst of all, we would like to express our heartfelt thanks to you for your appreciation of this work and your constructive suggestions! \n\nIn our response comments, we have carried out explanation and experiments based on your suggestions and concerns, including the further discussion about the wording of “intra-class boundary”, explanation (including the analysis and experiments) of the normalization in the SD-term, and further comparisons in Table 5(b). \n\nWe sincerely look forward to hearing from you. If there is any question or concern, please feel free to let us know. We will try our best to address your concerns. At the same time, if you are satisfied with our work and response, we hope to obtain your further recognition and a higher rating score from you. \n\nBest regards. \nThe authors of “Semantic Difference Convolution for Semantic Segmentation”.\n", " Dear reviewer U8Zf:\n\nHello! \n\nFirst of all, we sincerely thank you for your appreciation of our work! We are honored by this!\n\nIn our response comments, we have carried out explanations and experiments for your suggestions and concerns, including the further discussion about the semantic feature $f$, the further comparison between ours and other works (Gated-SCNN, DeepLab V3+, and SegFormer), further explanation of table 5(b). \n\nWe sincerely look forward to hearing from you. If there is any question or concern, please feel free to let us know. We will try our best to address your concerns. At the same time, if you are satisfied with our work and response, we hope to obtain your further recognition and a higher rating score from you. We also hope that this article can be honored to receive the attention and recognition of the entire academic community!\n\nBest regards. \nThe authors of “Semantic Difference Convolution for Semantic Segmentation”.\n", " Sincerely thanks for your appreciation of our work. Hoping our response will address your concerns.\n\n---\n\n**1: Further comparison with DeepLab V3+.**\n\nThanks for your advice. We will add it in the revised paper. 
\\\nHere, we add a comparative experiment with DeepLab V3+ (ResNet-101) in the table below, hoping to solve your concerns. Experiments show that our approach can bring a +1.2% mIoU performance improvement when using DeepLab V3+ (ResNet-101) as the baseline.\n\n\n| Model | mIoU (m.s.) |\n| :---- | :----: | \n| DeepLab V3+ | 80.5 |\n| DeepLab V3+ + Ours | 81.7 (+1.2) |\n\n---\n\n**2: About the baseline performance in Table 5(b).**\n\nThank you for your careful review. \\\nTable 5(b) reports the Boundary F-score of each method within a width of 1-px around the boundary. In fact, all current segmentation methods perform poorly on this metric. In contrast, the Boundary F-scores of the various methods improved considerably over a wider area (e.g., 3-px, 6-px, 9-px). In Table 3, we also compared the boundary F-score of two baselines under the 3-px mode and our method can still significantly improve the baseline even under the 3-px mode. \n\n\n", " Sincerely thanks for your appreciation of our work. Hoping our response will address your concerns. \n\n---\n\n**1: Further clarification of the semantic feature $f_{pi}$.**\n\nThanks for your advice. \\\nWe will add the necessary description in the revised version according to your suggestion. An academic consensus is that as the number of layers increases, features will contain more high-level semantic information (category-level, object-level). In principle, SDC/SDM used the deeper features of the deep model to refine the lower-level features; for example, in our SDC/SDM, the feature map of stage $i+1$ was used as the semantic feature (semantic guidance) to process the features of stage $i$.\n\n\n---\n\n**2: Further comparison with more boundary-aware methods, such as Gated-SCNN.**\n\nThanks for your valuable comments! \\\nHere, we report the comparison between our approach and the Gated-SCNN under the same baseline (DeepLab v3+ with WideResNet as the backbone). The training protocol is consistent with that in Gated-SCNN. \nOur approach outperforms the Gated-SCNN by +0.8% mIoU.\n\n\n| Model | mIoU (m.s.) | \n| :---- | :----: |\nGated-SCNN | 80.8 |\nOurs | 81.6 (+0.8) | \n\n\n\n---\n\n**3: Further comparison with SOTA, e.g., SegFormer.**\n\nThanks for your advice and we will add this to the revision. \\\nHere we choose the SegFormer as the baseline to evaluate our approach’s performance on ADE20K. We found that our SDM can improve the performance of Segformer by +0.9$\\sim$+1.4% mIoU when combining backbone models with different scales. \n\n| Backbone | SegFormer | SegFormer + Ours \n| :----: | :----: | :----: | \n| MiT-B0 | 37.4 | 38.8 (+1.4) | \n| MiT-B4 | 51.1 | 52.1 (+1.0) | \n| MiT-B5 | 51.8 | 52.7 (+0.9) | \n\n\n", " -------------\n\n\n**5: Further discussion with the parameters in the SDC operator (input and output size, padding, dilation, kernel-size, strides…).**\n\nThank you for your advice. Here, we refer to Page [5] to discuss the parameters involved in the use of SDC, which we will add to the supplementary materials. \n\nSDC takes two 4-D tensor as input, namely the feature map $X \\in \\mathcal{R}^{B \\times C_i \\times H \\times W}$ and the semantic feature map $F \\in \\mathcal{R}^{B \\times C_f \\times H \\times W}$. Parameters such as kernel-size in SDC have the same meaning as those in vanilla convolution. Given a kernel tensor of shape $w \\in \\mathcal{R}^{C_o \\times C_i \\times h \\times w}$, where $C_o, C_i, h, w$ is the out_channels, in_channels, filter_height, filter_width, this op performs the following: \n1. 
Extract the feature patch $x \\in \\mathcal{R}^{B \\times C_i \\times h \\times w}$ from $X$ and the semantic feature patch $f \\in \\mathcal{R}^{B \\times C_f \\times h \\times w}$ from $F$, according the parameter settings (padding, strides, dilation, kernel-size). \n2. Calculate the central difference map $d \\in \\mathcal{R}^{B \\times C_i \\times h \\times w}$ via $d[b, c, h, w] = x[b, c, h, w] - x[b, c, h_c, w_c] $. Then, flatten d to shape $B \\times C_i h w$\n3. Calculate the semantic difference map $s \\in \\mathcal{R}^{B \\times 1 \\times h \\times w}$ via $d[b, c, h, w] = x[b, c, h, w] - x[b, c, h_c, w_c] $. Then, repeat $C_i$ times for $s$ at the dimension-1 and flatten it by following 1, thus, we get the $s$ matrix of shape $B \\times C_i h w$. \n4. Flatten the kernel tensor $w$ to a matrix of shape $ C_i h w \\times C_o$. \n5. Performing the the element-wise multiplication and matrix multiplication to get the output matrix $o \\in \\mathcal{R}^{B \\times C_o}$ via $o = (d \\cdot s) \\otimes w$. \n6. Reshape $o$ from $B \\times C_o$ to $B \\times C_o \\times 1 \\times 1$ via as the output tensor of the current location. \n\nThus, given the input feature maps of hight/width of $H/W$, the dilation-rate of $d$, the kernel-width/height of $k$, the stride of $s$, the padding size of $p$, the height/width ($H_o/W_o$) of the output feature map should be $H_o= [H + 2 p - d(k-1) - 1]/s+ 1$, and $W_o = [(W + 2p - d(k-1) - 1)/s] + 1$. \n\n\n\n", " **2: About the novelty.**\n\nThank you for your recognition of the simplicity and effectiveness of our work. \\\nHere, we discuss in detail the contributions and novelty in this paper.\n+ This paper analyzes for the first time the reason why it is difficult to model fine boundaries in deep networks, that is, the vanilla-convolution operator tends to blur the local details, and the stacked multiple convolutional layers aggravate the ambiguity problem, which is the intrinsic cause of the difficulty in modeling the boundary details in deep models.\n+ For the intrinsic reasons of the blurred boundary information in the deep model, we design a very simple and efficient boundary enhancement operator, termed SDC, which can strengthen the real object boundary.\n+ In order to make our method easily fit with various deep segmentation models, we designed a flexible and lightweight module, SDM, based on SDC. This module can be inserted into most existing deep segmentation models as a neck part, without modifying the original network structure.\n+ Our approach significantly improves the segmentation performance (mIoU, and Boundary F-Score) of the baseline models on multiple current public datasets.\n\n\n**3: Further evaluation of new metrics.**\n\nThank you for your comments. In our paper, we have used mIoU and Boundary F-score to measure the performance of the model at the object level and Boundary detail level respectively, see Table 1 and Table 2 for mIoU, Table 3 and Table 5 for the Boundary F-score. \n\n\nFernandez-Moral. [1] proposed a new metric that accounts for both global and contour accuracy in a simple formulation. \nKyungsu Lee [2] proposed a new metric, which is the boundary-oriented intersection over union B-IoU for quantitative evaluation of the shapes and boundaries of the model. \nBowen Chen [3] proposed the Boundary IoU (Intersection-over-Union), a new segmentation evaluation measure focused on boundary quality. 
We choose [3], the latest of the three, as the evaluation metric to compare the baseline (HRNet48-OCRNet) and ours (baseline + SDM). The performance is as follows. Clearly, our SDM significantly improves the performance of the Baseline model on the boundary metric [3]. \n Method | Boundary IoU \n| ---- | :----: | \n| Baseline | 62.4 |\n| Baseline + SDM | **66.1 (+3.7)** |\n\n[1] Fernandez-Moral, Eduardo, et al. \"A new metric for evaluating semantic segmentation: leveraging global and contour accuracy.\" 2018 IEEE intelligent vehicles symposium (iv). IEEE, 2018. \n[2] Lee, Kyungsu, et al. \"Boundary-oriented binary building segmentation model with two scheme learning for aerial images.\" IEEE Transactions on Geoscience and Remote Sensing 60 (2021): 1-17. \n[3] Cheng, Bowen, et al. \"Boundary IoU: Improving object-centric image segmentation evaluation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. \n\n\n\n**4: Further visualization (Grad-CAM).**\n\nThank you for your comments. **Please check Section E.3 of the revised supplementary material for the Grad-CAM visualization comparison between the baseline model (HRNet48-OCRNet) and ours (baseline + SDM).** \nWhen performing the Grad-CAM visualization, we set the feature map fed into the decoder as the target layer's feature map. \nIt is clear that SDM makes the activation region of the model more concentrated inside the object, and the boundary of the activation region is more consistent with the real semantic boundary of the object.\n\n\n", " Thank you for your comments! We will add the relevant contents you mentioned to the appendix according to your suggestions, including theoretical analysis of performance, visualization based on grad cam, and performance under new evaluation metrics.\nIf you are satisfied with our reply, we sincerely hope you can improve your score.\n\n**1: Further mathematical analysis of the superior performance of the proposed method.**\n\nHere, we will combine the physical model of the classical anisotropic diffusion to illustrate our method theoretically. (**Please check the revised supplementary material for details**)\n\nConsidering a parameterized deep model $F_{\\theta}(\\cdot)$ mapping an image $I$ to pixel-wise feature maps $X$. The network contains $L$ stacked neural layers, we denote the features from the $\\ell$-th layer as $X^\\ell$. Consider the index of two layers $\\ell_2 > \\ell_1$. An academic consensus [r1_1,r1_2, r1_3] is that with the deepening of the number of layers and the expansion of the cumulative receptive field, features will become more abstract and high-level, and more attention will be paid to category-related (object-level) semantic information. \nWhile the traditional convolution operator fuses local features, it also indiscriminately blurs the boundary information of local details, which is the inherent reason for the fuzzy prediction of the deep model at the boundary. Therefore, an intuitive idea is to improve the Conv operator to make it have local perceptual anisotropy. \n\nTo this end, we turn to the following partial differential equation of the anisotropic diffusion process: \n\n$\n\\partial u_t / \\partial t = \\text{div}(g(\\nabla f) \\nabla u_t),\n$ \n$ \nu_0 = x^{\\ell_1}, \n$ \n\nwhere $f = x^{\\ell_2}$ refers to the higher-level semantic feature of the input, which is fixed to the current diffusion process. 
$u_0=x^{\\ell_1}$ represents the lower-level feature that we want to refine, and $f$ and $u$ maintain the same width and height by interpolation. \n$t$ is the time step, $\\text{div}$ is the divergence operator and $\\nabla$ is the gradient operator. \n$g(\\cdot)$ is the so-called diffusion coefficient function, which is sensitive to changes (gradient) in local semantic features. \nThe above equation is based on the theory that mass is conserved during diffusion, rather than being created and destroyed. Similar diffusion models are widely used in traditional image filtering theory [r1_4, r1_5]. \n\nThe physical meaning described by this partial differential equation is that the diffusion in places where the local high-level information changes violently (more likely at the category boundary) should be treated with caution (to avoid the boundary being blurred). For places where the high-level information changes gently, the diffusion process should be smooth (the false boundary caused by the texture inside the object should be blurred).\n\nThis partial differential equation can be approximately solved by the finite difference method, for example:\n\n$u_{t+1}[h, w] = u_{t}[h, w] + \\delta, \n$\n$$\n\\text{where } \\delta = g(f[h-1, w] - f[h, w]) * (u_{t}[h-1, w] - u_{t}[h, w]) \\\\ + g(f[h+1, w] - f[h, w]) * (u_{t}[h+1, w] - u_{t}[h, w]) +g(f[h, w-1] - f[h, w]) * (u_{t}[h, w] - u_{t}[h, w]) +g(f[h, w+1] - f[h, w]) * (u_{t}[h, w+1] - u_{t}[h, w])\n$$\n\nHowever, the stability and convergence of the finite difference method require the fine setting of boundary conditions and related parameters, and its iterative operation process will bring significant time and memory consumption, especially in the training phase, which is easy to cause numerical instability. \n\nOur SDC/SDM uses a learnable parameterized mapping to model the PDE solving process (**Please check the revised supplementary material for details**), and a similar method of learning PDE solution through the neural network can be found in [r1_6]. \n\nWe also compared our SDC/SDM with the finite difference method (iterative to convergence, 20 runs to take the average performance) to process the backbone's features in the semantic segmentation model. Our learnable procedure is much better and significantly more stable than the finite difference method. This further explains the superiority of our method.\n\n\n| Solver | $~~~~$mIoU |\n| :---- | :----: | \n| Finite difference method | 80.2 $\\pm$ 1.4 | \n| Our SDC/SDM | 82.9 $\\pm$ 0.2| \n\n\n\n\n\n\n\n[r1_4] Pietro Perona and Jitendra Malik. Scale-space and edge detection using anisotropic diffusion. T-PAMI 1990. \n[r1_5] Guillermo Sapiro. Geometric partial differential equations and image analysis. Cambridge University Press. \n[r1_6] Johannes Brandstetter, Daniel Worrall, Max Welling. Message Passing Neural PDE Solvers. ICLR 2022. \n[r1_7] Yuehaw Khoo, Jianfeng Lu, Lexing Ying. Solving parametric PDE problems with artificial neural networks. EJAM. \n\n\n", " Thank you very much for your patient review. We will try our best to address all your concerns, and we sincerely hope to get your appreciation of this work. If you feel that your concerns have been better addressed, we sincerely hope the score could be improved.\n\n---\n\n **Q1-1. About the mathematical illustration, reference, and further discussion about the semantic feature $f$ in SDC.**\n\nThank you for your suggestion, and we will add this to the revised version. 
\n\nConsider a parameterized non-linear deep learning model $F_{\\theta}(\\cdot)$ mapping an image $I$ to high-order pixel-wise feature maps $X$. The network contains $L$ stacked differentiable neural layers, and we denote the output features from the $\\ell$-th layer as $X^\\ell$. Consider the indices of two different layers $\\ell_1$ and $\\ell_2$, where $\\ell_2 > \\ell_1$. An academic consensus [r1_1, r1_2, r1_3] is that as the depth and the cumulative receptive field grow, features become more abstract and high-level and attend more to category-related (object-level) semantic information, while the lower-level (high-resolution) features contain more texture and detail [r1_1]. \\\nSDC takes the higher-level features $X^{\\ell_2}$ as the semantic guidance to enhance the boundary of the low-level feature $X^{\\ell_1}$ from the $\\ell_1$-th layer, which blurs textures (false boundaries) inside an object while enhancing the real boundaries between different categories. \n\n+ [r1_1] Gedas Bertasius, Jianbo Shi, Lorenzo Torresani. High-for-Low and Low-for-High: Efficient Boundary Detection from Deep Object Features and its Applications to High-Level Vision. ICCV 2015. \n+ [r1_2] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie. Feature Pyramid Networks for Object Detection. CVPR 2017. \n+ [r1_3] Jie Xu, Huayi Tang, Yazhou Ren, Liang Peng, Xiaofeng Zhu, Lifang He. Multi-level Feature Learning for Contrastive Multi-view Clustering. CVPR 2022. \n\n---\n\n**Q1-2. Concerns about the impact when the SD value is minimal.**\n\nThis is a valuable question!\n\nAs expected, the SD value of SDC will indeed become very small inside an object, which effectively filters out the false boundaries caused by inner-object texture. Because the role of SDC is to highlight category boundaries, the SDC features cannot be directly used to produce segmentation results. Therefore, we further designed a lightweight and efficient SDM module that flexibly combines the SDC features with the features from the backbone, avoiding the problem that the internal features of the object are completely suppressed. We illustrate this with the following experiments on Cityscapes-Val. \n\n| Feature fed into the decoder | mIoU |\n| :---- | :----: | \n| Backbone-feature | 81.1 |\n| SDC-feature | 61.4 |\n| SDM-feature | 82.9 (+1.8) | \n\nWhen using OCRNet (HRNet48) as the baseline, the final performance is significantly degraded if only SDC features are used as input to the decoder. After SDM fuses the SDC features with the backbone's features, the model's performance significantly exceeds the baseline. \n\nThis is also demonstrated in Figure 3 of the supplementary material. The SDM module fuses SDC features with the backbone features flexibly, which not only preserves clear boundaries but also avoids excessive suppression of the internal features of objects.\n\n---\n\n**Q2. The motivation of SDC and further applications.** \n\n+ **Motivation:** The motivation is very intuitive: we find that the vanilla convolution operator causes local features to become blurred and over-smoothed, which makes it difficult for CNNs to model fine boundary information. This weakness is evident in the semantic segmentation task. 
In this paper, we propose a new operator that can filter out the pseudo-inner-object boundary and enhance the real category boundary, significantly alleviating the problem of blurred boundaries caused by Vanilla-Conv. \n\n+ **Further applications:** In the future, we plan to extend SDC/SDM to other visual topics, such as instance segmentation, panoptic segmentation, image super-resolution, and image denoising.\n", " \nThank you sincerely for your appreciation and constructive comments on this paper. \nWe will do our best to address your concerns, and revise the paper according to your suggestion. \n\n---\n\n**1: Can this method generalize to other dense estimation tasks?**\n\nActually, the proposed SDC and SDM can also be applied to various vision tasks, such as instance segmentation, which are served as our future work.\n\n---\n\n**2: Concerns about the wording of “intra-class boundary.”**\n\nThis is a good question! \\\nWe agree with you that only the output layer contains exact category-level information. \nThe feature before projecting to the class number dimension also contains the implicit category information.\nIn the beginning, we tried to add an auxiliary head supervised by cross-entropy loss between ground-truth and prediction for the output feature of each stage of the backbone and use the output score of auxiliary heads to compute the semantic difference term.\nWe found that the performance is similar to using the feature directly while introducing too many parameters attributed to the additional auxiliary heads especially when the dataset has a large number of classes. Besides, as the features of shallow layers (stage 1 and 2 of the backbone) lack enough semantic information, the segmentation performance of their corresponding auxiliary heads is poor. \nTherefore, we choose the current simple but effective implementation.\n\n---\n\n**3: Concerns about the potentially unstable training caused by the unnormalized SD term.**\n\nGreat question! \\\nActually, we have not found the unstable training phenomenon that you are concerned about. \nThe reason may be that even though the semantic feature fed into SD is not normalized, the BN layer, which is widely used in the whole network, avoids the input features in SD from producing abnormal changes. \\\nWe have also compared some different measures of semantic difference term in Table 4 of the supplementary material. Specifically, we replace the Euclidean distance with cosine distance with a limited range of $[0, 1]$. And we found that there was no significant difference. For clarity, we list the main results in the table below.\n\n| measure of semantic difference term | mIoU |\n| :---- | :----: | \n| Euclidean Distance | 82.9 | \n| Cosine Distance | 82.8 |\n\n---\n\n**4: Further experimental results in Table 5(b).**\n\nThank you for your good suggestion! In the following table, we reorganized Table 5(b) as you recommended. \n\n | Model | Boundary F-score \n| :---- | :----: | \n| Baseline | 65.1\n| Baseline+Ours | 69.5\n| Baseline+DenseCRF | 67.2\n| Baseline+DenseCRF+Ours | 69.7\n| Baseline+SegFix | 68.8\n| Baseline+SegFix+Ours | 70.1\n| Baseline+InverseForm | 69.1\n| Baseline+InverseForm+Ours | 70.5\n", " Thanks for your advice! We are pleased to address all your concerns.\n\n---\n\n**1: Further analyses for Table 5(a).**\n\nThanks for your valuable advice! We will explain it more clearly in the final version. 
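As a side note to the Euclidean-vs-cosine comparison a few paragraphs above, here is a tiny sketch of the two semantic-difference measures. The names and shapes are assumed and this is not the paper's exact SD term from Eq.(3); it only contrasts an unbounded and a bounded distance.

```python
import torch
import torch.nn.functional as F

def sd_euclidean(f_p, f_q):
    # Unbounded measure: L2 distance between two semantic feature vectors.
    return torch.norm(f_p - f_q, p=2, dim=-1)

def sd_cosine(f_p, f_q, eps=1e-8):
    # Bounded alternative: 1 - cosine similarity, rescaled to [0, 1].
    return 0.5 * (1.0 - F.cosine_similarity(f_p, f_q, dim=-1, eps=eps))

f_p = torch.randn(4, 64)   # semantic features at two neighbouring positions (toy)
f_q = torch.randn(4, 64)
print(sd_euclidean(f_p, f_q))
print(sd_cosine(f_p, f_q))
```

The unbounded Euclidean form is presumably kept in check in practice by the BN layers mentioned in the response, which would explain why the two measures behave similarly in the table above.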
\n+ **The calculation of $\\rho$:** As the caption of Table 5, the $\\rho$ in the 4-th column of Table 5(a) denotes the proportion of the computation cost introduced by our approach in the whole network, which is calculated by (\"Baseline+Ours\" - \"Baseline\") / \"Baseline+Ours\". Taking the 4.22% in the first row of Table 5(a) as an example, it is obtained by $\\frac{73.5 - 70.4}{73.5}$. \n+ **Percents parameters introduced for each convolution layer:** In fact, our SDC does not introduce additional parameters, and has the same parameters as vanilla convolution and CDC, as the semantic distance (SD) term in Eq.(3) is parameter-free. In each SDM, the parameters come from the two $1\\times1$ convolution $\\phi$ and $\\psi$ in Eq.(6) and Eq.(8), which aim to project the semantic feature and fuse the input feature with the boundary-aware feature, respectively. \n+ **Does CDC also introduce more parameters to the network?** As mentioned above, both CDC and our SDC do not bring more parameters and have the same parameters as vanilla convolution. Thus, if we replace the SDC in SDM with CDC, it will introduce additional parameters as well.\n\n---\n\n**2: About the pure improvement of the semantic difference term, and the performance of replacing SDC with CDC.**\n\nGreat question! This is no doubt a critical ablation study. \\\nActually, we have conducted such experiments to verify the effectiveness of the proposed semantic difference term by replacing the SDC in SDM with CDC and vanilla convolution. \\\nPlease refer to Table 4 (a) in the main paper for details. After replacing SDC in SDM with CDC (CDC has no semantic difference term), the performance will drop abruptly (SDC: 82.9% mIoU, CDC: 80.4% mIoU, Vanilla-Conv: 81.3% mIoU, baseline: 81.1% mIoU)! \nThe reason is that CDC tends to enhance all the edges including the pseudo boundaries (such as inner-object texture), which disrupts the semantic segmentation task. By contrast, benefiting from the semantic difference term in our SDC, only the real inter-class boundaries are strengthened, which is compatible with the goal of the semantic segmentation task.\n\n---\n\n**3: Does the benefit of the semantic difference term for intra-class boundaries rely on the scale of convolution kernels and the quality of the semantic feature?**\n\nSincerely thank you for your interesting question! We hope to dispel your concerns through our answers! \n \n***(1) Effect of the kernel size and dilation rate of SDC:*** \\\nWe conduct experiments under different choices of the kernel size and dilation rate of SDC in the table below. It can be found that:\n+ Increasing the size of the convolution kernel in SDC does not bring significant improvement (82.9 $\\to$ 83.0).\n+ Excessive dilation rate leads to negative effects (82.9 $\\to$ 82.8 $\\to$ 82.5). \n\nIn fact, large kernels and dilated convolution are mainly used for enlarging the receptive fields.\nWhile, the goal of our SDC is to infer the inter-class boundary cue in a local region, which may not require more contexts brought by large kernels. 
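For readers trying to picture the operator being ablated here, the following is only a rough sketch of a semantic-difference-guided convolution; the paper's exact Eq.(2)-(3) and the SDM fusion of Eq.(6)/(8) are not reproduced in this thread, so the class name, the use of `unfold`, and the residual-style fusion are assumptions. Consistent with the response, the only learnable parameters below sit in the ordinary convolution; the semantic-distance weighting itself is parameter-free.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticDifferenceConvSketch(nn.Module):
    # A 3x3 convolution whose local pixel differences are re-weighted by the
    # semantic distance of a higher-level guidance feature (illustration only).
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)

    def forward(self, x, f):
        # x: (B, C, H, W) low-level feature; f: (B, Cf, H, W) guidance at the same resolution.
        b, c, h, w = x.shape
        x_un = F.unfold(x, 3, padding=1).view(b, c, 9, h, w)
        f_un = F.unfold(f, 3, padding=1).view(b, f.shape[1], 9, h, w)
        sd = (f_un - f.unsqueeze(2)).pow(2).sum(1, keepdim=True).sqrt()  # semantic distance per neighbour
        diff = x_un - x.unsqueeze(2)                                     # pixel difference per neighbour
        boundary = (sd * diff).sum(2)     # large only where semantics change, i.e. near true boundaries
        return self.conv(x + boundary)    # learnable parameters live only in this plain conv

sdc = SemanticDifferenceConvSketch(8, 8)
out = sdc(torch.randn(2, 8, 32, 32), torch.randn(2, 16, 32, 32))
print(out.shape)   # torch.Size([2, 8, 32, 32])
```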
Similarly, a large dilation rate may lead to the lack of local information, which is critical for local boundary exploration.\n\n| kernel size | dilation rate | mIoU |\n| :----: | :----: | :----: | \n| $3\\times3$ | 1 | 82.9 | \n| $5\\times5$ | 1 | 83.0 | \n| $3\\times3$ | 3 | 82.8 | \n| $3\\times3$ | 5 | 82.5 | \n\n***(2) Effect of the quality of semantic feature:*** \\\nDefinitely, we found that the quality of semantic features would significantly affect the benefits brought by semantic difference term. \nAs we know, high-level features tend to contain more semantic information, so we design SDM as a neck-part to refine the features of the lower-level stage by using the features from the higher-level stage. \\\nIn Table 5b, we have compared the different choices of semantic feature $F_i^s$ in $i$-th SDM. For the clarity, we list the main results in the table below, where $F_{i}$ means the output feature of stage $i$ in backbone, which is also the input feature of $i$-th SDM. \nIt can be seen that:\n+ The segmentation performance will be seriously degraded when the semantic feature of SDC input is implemented by the low-level feature $F_{1}$.\n+ When using $F_{i}$ (i.e., the input feature) as semantic feature, the semantic feature has the same level semantics as the input feature, which may not exert the true power of SDC.\n+ With the higher-level feature $F_{i+1}$ at the next scale $i+1$ with richer semantics as the semantic feature, the performance achieves the best.\n\n| Semantic feature $F_i^s$ | mIoU |\n| :----: | :----: | \n| $F_1$ | 79.3 | \n| $F_i$ | 82.2 | \n| $F_{i+1}$ | 82.9 | \n\n\n", " Thank you for your appreciation! We will try our best to address your concerns! \n\n---\n\n**1: About the new parameters introduced by our approach.**\n\nA good question! \\\nActually, our SDM is a quite efficient and lightweight module.\nAs shown in Table 5(a), our SDM can bring +1.8% mIoU and +4.4% boundary F-score (1-px) for the well-known baseline segmentation model, OCRNet (HRNet-48), on Cityscapes-val dataset, while only introduces 0.1G FLOPs and 3.1M parameters. \\\nConsidering the significant performance gain, we believe that such few additional computational cost is well worth.\n\n---\n\n**2: Comparisons between SDC and CDC.** \n \nThis is a valuable question! We will add the full comparisons between them into the revised version. \\\nNext, we analyze the differences between SDC and CDC from the following four aspects: \n+ **Motivation:** \n - CDC is proposed for detecting the edge information effectively, which is sensitive to all edges including both inter-class boundaries and inner-class boundaries, thus can not be applied for semantic segmentation tasks. \n - While, our SDC is specially designed for semantic segmentation tasks, which can enhance only inter-class boundaries and suppress inner-class boundaries. \n+ **Formulation:** \n - As shown in Eq.(2) and Eq.(3), compared with CDC, our SDC has an additional term, i.e., semantic difference term, which is simple but the most critical point to filter out the perception of pseudo-inner-class boundaries. \n+ **Application:** According to their properties, SDC and CDC have completely different application scenarios. \n - With the ability to perceive only inter-class boundaries, our SDC is more suitable for various visual tasks to distinguish objects with different classes, such as semantic segmentation. 
\n - By contrast, CDC is better suited for detecting class-agnostic edges and has been widely used for edge detection and face-spoof detection tasks. \n+ **Theoretical analysis:**\n - Inspired by image filtering theory, we modeled feature fusion and boundary enhancement of SDC/ SDM as an anisotropic diffusion model in classical physics. Further, we give a theoretical explanation of SDC/SDM, that is, SDC/SDM acts as a stable and efficient PDE solver. The details are updated in the revised supplementary material. \n - In contrast, CDC is derived from the gradient operator/Laplacian operator in traditional image processing. \n\n---\n\n**3: Further clarification of the effectiveness of the semantic difference term (SD-term).**\n\nThanks again for your advice! \\\nWe will give more explanations in terms of the following three aspects: \n+ **Empirical analysis:** Compared with features of shallow-layer, deep features generally contain higher-level semantic information, which is closer to category-level and object-level. The SD-term in SDC uses the semantic feature to selectively filter out false boundaries and enhance real boundary features. For two different pixels inside the same object, their high-level semantic features tend to be similar. In this case, the SD-value will be small, and the SD-value will suppress the Pixel-wise Difference item, which plays the role of suppressing the inner-object variation. In the face of two pixels located in objects of different categories, their high-level features are often different, and the SD-value will be very large. In this case, even if the Pixel-wise difference of the low level may be small, the large SD-value will highlight and strengthen it, which plays a role in strengthening the boundary of the real category. \n+ **Ablation experiments:** We also conducted ablation experiments to verify the effectiveness of SDC (SD-term). For details, see Table 4 (a). After we replace the SDC operator in the SDM module with CDC(CDC is without the SD-term), the final performance will plummet from 82.9% mIoU to 80.4% mIoU! \n+ **Theoretical explanation:** Inspired by the image filtering theory, our SDC/SDM actually models the anisotropic diffusion process in classical physics. Further, we give a theoretical explanation of SDC/SDM, that is, SDC/SDM acts as a stable and efficient PDE solver, and SD-term acts as an input in the diffusion coefficient function. The details were updated in the revised supplementary material. \n", " This paper proposes a novel convolution-based operation/operator that could utilize the semantic features in an efficient and effective way for the segmentation task. The main contributions of this paper are (1) a new operation (semantic difference convolution, SDC) that is effective in the segmentation of target objects with precise boundaries, (2) the related module (SDM) to enhance the boundary information at the feature level, and (3) the proposed network exhibit the state-of-the-art semantic segmentation performance with precise boundaries of the target objects. \n1. Strengths\n- The reviewer significantly understands the significance and importance of the task proposed in this paper. Segmenting various objects in real-world images is significantly important for many applications. Especially, clear and precise boundaries are required for the precise semantic segmentation task. \n\n- Reproducibility of the manuscript. The manuscript is well organized in terms of exhibiting the hyper-parameters and model architecture. 
\n\n- The manuscript is well organized and well written.\n\n2. Weakness\n- More detailed descriptions are illustrated in the “Question” and “Limitation” sections. Please see below.\n\n * Questions & Discussions\n1. The semantic distance (SD in equation 3) should be discussed. The SD is calculated using the Euclidean distance of two semantic features (f) with different positions. Intuitively, SD value by similar feature exhibits small value, whereas SD value exhibits magnified values when calculating boundaries. At this point, the reviewer is curious about the followings:\n\n- 1-1. The authors’ address is that f exhibits semantic feature, and the Euclidian distance between two f exhibits semantic distance. Here, the definition and the mathematical illustration for the semantic feature and its meaning should be discussed, or a reference is required. \n\n- 1-2. The normalization method (even without) can extremely suppress the values of the inner parts of the objects. In general, the color and semantic features significantly exhibit similar values in the same objects. Therefore, the SD value inside the object can be almost 0. Whereas the SD values nearby boundaries could be extremely magnified. Therefore, the reviewers worried that extremely small inner values degrade feature extraction for semantic features related to the target objects and only focus on the boundary-oriented features, and thus the predicted segmentation mask includes big halls inside.\n\n2. Since the convolution operation can be utilized in many applications, the reviewer is curious that the motivation of the proposed operation in the semantic segmentation task. The proposed operation can be used in general applications such as classification and image generation tasks.\n 1. Major issues\n- The limited novelty should be discussed. As illustrated above (Question), the proposed operation exhibits limited novelty. Despite the simple yet effective mathematical illustration of the proposed convolutional operation, the strict mathematical modeling for the operation and mathematical analysis of why the proposed operation exhibits outstanding performance in segmenting objects’ boundaries should be discussed. \n\n- The experiments should be improved. As the reviewer already understands, the evaluation metric of “mean Intersection over Union (mIoU)” can qualitatively measure the object details. However, recent studies [1-3] proposed new evaluation metrics to measure the object detail (especially boundaries of the target objects) quantitatively. Otherwise, the visualization of the feature map could illustrate the novel feature extraction when processing object details. Please refer to the activation maps [4]. To clear the authors’ addresses, in terms of the improved boundary-oriented segmentation, more experimental or mathematical evidence should be justified.\n\n- Since the authors proposed the convolution-based operation, the mathematical modeling of the operation should be discussed with the parameters used for the previous convolution operation. The mathematical modeling can include the rank, dimension, size, and inputs (parameters). For instance, the size of the output feature map can be determined as --- when the padding parameter is “valid” or “same”, or when using dilation parameters. The author should illustrate the mathematical form to calculate the output shape using kernel size, padding, and strides (additionally dilation). Please refer to this page [5] for the mathematical form.\n\n\n[1] Fernandez-Moral, Eduardo, et al. 
\"A new metric for evaluating semantic segmentation: leveraging global and contour accuracy.\" 2018 IEEE intelligent vehicles symposium (iv). IEEE, 2018.\n\n[2] Lee, Kyungsu, et al. \"Boundary-oriented binary building segmentation model with two scheme learning for aerial images.\" IEEE Transactions on Geoscience and Remote Sensing 60 (2021): 1-17.\n\n[3] Cheng, Bowen, et al. \"Boundary IoU: Improving object-centric image segmentation evaluation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n[4] Selvaraju, Ramprasaath R., et al. \"Grad-cam: Visual explanations from deep networks via gradient-based localization.\" Proceedings of the IEEE international conference on computer vision. 2017.\n\n[5] https://www.tensorflow.org/api_docs/python/tf/nn/conv2d\n\n2. Minor issues\n- The reviewer recommends reviewing the grammar and typo errors to improve the quality of the manuscript.\n", " This paper proposes an efficient boundary-aware convolution operator to boost the boundary modeling capacity for semantic segmentation. The proposed operator is sensitive to the inter-class boundary while ignoring the noisy intra-class pseudo-boundaries. Based on the proposed operator, a lightweight module is designed to enhance boundary-related information, which is flexible to be plugged into any existing encoder-decoder segmentation model. Experimental results validate that the proposed approach can achieve consistent improvements for boundary regions over several baselines.\n\n Strength:\n-\tThis paper proposes an efficient boundary-aware convolution operator to boost the boundary modeling capacity for semantic segmentation.\n-\tThe proposed approach shows promising performance for inter-class boundaries.\n-\tComprehensive ablative studies are conducted to show the effectiveness of the proposed approach.\n\nWeakness:\n-\tThe proposed approach may introduce new parameters to the network.\n-\tThe proposed Semantic Difference Convolution (SDC) is similar to Central Difference Convolution (CDC).\n-\tThe effectiveness of the semantic difference term needs more clarification.\n -\tThe further analyses in Table 5a are appreciated, it will be more convincing if some details are given. How to calculate $\\rou$ in this table? How many percents of more parameters are introduced by the proposed approach for each convolution layer? Does CDC also introduce more parameters to the network?\n-\tAbout the pure improvement of the semantic difference term: If we replace SDC with CDC in the proposed approach, how well does it perform?\n-\tDoes the benefit of the semantic difference term for intra-class boundaries rely on the scale of convolution kernels and the quality of the semantic feature?\n The authors adequately addressed their work's potential negative social impact.", " This paper studies the problem of how to improve the semantic segmentation results for existing architectures. The authors propose a semantic difference guided convolution, so-called SDC, to deliberately enhance the representations in the semantic boundary area. Specifically, it enhances current layer’s kernel with the the next layer’s feature differences, instead of using current layer’s differences like previous methods, which means that the next layer’s features could serve as a semantic guidance to the current layer. The proposed module is claimed to be flexible to co-operate with any ‘encoder-decoder’ architectures. And the effectiveness is verified through extensive experiments. 
** Strength ** \n\n+: The paper is well written and easy to follow.\n\n+: The insight of enhancing the semantic boundary through next stage’s differences is interesting and has somewhat novelty for the semantic segmentation. Can this method generalize to other dense estimation task? \n\n+: The authors fully demonstrate the effectiveness of the proposed method on various datasets.\n\n** Weakness **\n\n-: The wording of “intra-class boundary“ is not precise enough. According to my understanding, the semantic difference is provided by the next layer’s feature, not the output layer. And only the output layer corresponds to the class-level information exactly. Especially in the early layer, two approaching pixels with different features do not imply they belong to different classes.\n\n-: There seems to be no normalization to the SD(*). Would this induces unstable training if the SD(*) generates a quite large number? \n\n-: In table 5(b), it would be more convincing if the author organizes the comparisons as “Baseline, Baseline+Ours, Baseline+DenseCRF, Baseline+DenseCRF+Ours, Baseline+SegFix, Baseline+SegFix+Ours, Baseline+InverseForm, Baseline+InverseForm+Ours”.\n Please refer to the Strengths and weakness. The authors have pointed out the limitations of their work.\n\nSince it is just a basic method to deal with 2D semantic segmentation, it has nothing to do with the potential negative societal impact.\n", " Overall, this is a good work.\n\nIn this paper, a new semantic segmentation pipeline is presented, based on proposed semantic difference convolution. The semantic difference convolution incorporates higher-level semantics into the lower-level convolution and helps learn better features which are sensitive to semantic boundaries. The proposed method is easy to implement on existing feed-forward segmentation baselines. According to a set of experiments on previous segmentation models including DeepLabv3, OCRNet and MaskFormer on 4 public datasets, the effectiveness of the proposed method is clearly demonstrated. Besides, this paper also provides analysis on the computational complexity and various comparisons regarding the configurations, as well as many ablations to show the SOTA results from the proposed method.\n\nThis paper is also well-organized, that I can follow their draft easily. Even though there are a few comments from my side, that I believe it could be improved, this is a good work for NeurIPS.\n\nAfter reading all the comments and authors' feedback, I vote for accept (7) on this submission. Strengths:\n+ This paper is well organized. It is overall easy to follow.\n+ This paper proposes a new convolution operation, which is simple to implement. Based on the proposed semantic difference convolutions (SDC) and SDM, more favorable segmentation results can be achieved. To show the effectiveness of the proposed method, the authors show different network backbone as well as segmentation models.\n+ The proposed SDC achieves more improvements compared with previous boundary-aware methods. Further, the SDC is compatible with other techniques, such as DenseCRF, SegFix.\n+ From the analysis and empirical evaluation, the proposed SDC will not increase the computational cost significantly. \n\nWeakness:\n- In section 3.2, it needs to discuss more details of f_{pi}. What is f_{pi}? Even though readers can understand when they see section 4.2 and Figure 4, it is a little confused in section 3.2. 
More explanation on the semantic information f_{pi} will be helpful.\n- Lack of comparison with more boundary-aware methods, such as Gated-SCNN.\n- In this work, DeepLabv3, OCRNet and MaskFormer are applied as baseline models. Why not apply the proposed method on the current state-of-the-art model to achieve new SOTA result, for example SegFormer? 1. This paper applies DeepLabv3, why not DeepLabv3+?\n2. In Table 5 (b), why is the baseline performance only 65.1? What is the baseline here? More details are needed. Besides, why not apply a stronger baseline in Table 5? No. This work is a fundamental research in computer vision." ]
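Since boundary-quality metrics (Boundary IoU, boundary F-score) come up repeatedly in the exchanges above, here is a small self-contained toy computation of both. These are simplified illustrations, not the official implementations from Cheng et al. [3] or the benchmark scripts used in the tables; the band width and tolerance values are arbitrary.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_band(mask, width=2):
    # Pixels within `width` px of the mask boundary: 1-px contour, then dilated.
    contour = mask ^ binary_erosion(mask)
    return binary_dilation(contour, iterations=width)

def boundary_iou(pred, gt, width=2):
    bp, bg = boundary_band(pred, width), boundary_band(gt, width)
    return np.logical_and(bp, bg).sum() / max(np.logical_or(bp, bg).sum(), 1)

def boundary_f_score(pred, gt, tol=1):
    bp, bg = pred ^ binary_erosion(pred), gt ^ binary_erosion(gt)
    precision = np.logical_and(bp, binary_dilation(bg, iterations=tol)).sum() / max(bp.sum(), 1)
    recall = np.logical_and(bg, binary_dilation(bp, iterations=tol)).sum() / max(bg.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)

gt = np.zeros((64, 64), dtype=bool); gt[16:48, 16:48] = True       # toy ground-truth square
pred = np.zeros((64, 64), dtype=bool); pred[18:50, 16:48] = True   # prediction shifted by 2 px
print(boundary_iou(pred, gt), boundary_f_score(pred, gt))
```

On these two masks the region IoU is still high even though the boundary metrics already register the 2-px shift, which is the gap that boundary-oriented metrics such as [1-3] are meant to expose.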
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "HpTr6xyASsE", "HpTr6xyASsE", "PbmvXh5ylsT2", "--34I9uRzb_", "aymS7T9leO7", "O6j1khky9IU", "aVi__MHx_7d", "T1hWTRYNTIE", "lHqnuTWkErG", "vS9ZHl1gGB", "vS9ZHl1gGB", "47bxz-N6N9O", "XO1wjhUFHEj", "RxJH8D_mN7z", "RxJH8D_mN7z", "RxJH8D_mN7z", "vS9ZHl1gGB", "vS9ZHl1gGB", "vS9ZHl1gGB", "vS9ZHl1gGB", "XO1wjhUFHEj", "47bxz-N6N9O", "47bxz-N6N9O", "nips_2022_mmzkqUKNVm", "nips_2022_mmzkqUKNVm", "nips_2022_mmzkqUKNVm", "nips_2022_mmzkqUKNVm" ]
nips_2022_q__FmUtPZd9
Social-Inverse: Inverse Decision-making of Social Contagion Management with Task Migrations
Considering two decision-making tasks $A$ and $B$, each of which wishes to compute an effective decision $Y$ for a given query $X$, can we solve task $B$ by using query-decision pairs $(X, Y)$ of $A$ without knowing the latent decision-making model? Such problems, called inverse decision-making with task migrations, are of interest in that the complex and stochastic nature of real-world applications often prevents the agent from completely knowing the underlying system. In this paper, we introduce such a new problem with formal formulations and present a generic framework for addressing decision-making tasks in social contagion management. On the theory side, we present a generalization analysis for justifying the learning performance of our framework. In empirical studies, we perform a sanity check and compare the presented method with other possible learning-based and graph-based methods. We have acquired promising experimental results, confirming for the first time that it is possible to solve one decision-making task by using the solutions associated with another one.
Accept
Strengths: * novel formulation for task migration in social management tasks * theoretical analysis: generalization bound * results shed light on certain possible design choices * adequate empirical evaluation on simulated data Weaknesses: * formalization may be too restrictive to capture realistic settings (e.g., observing only one type of task) * connections to some related literature not clearly established * some concerns regarding baselines used in experiments, or lack thereof * societal implication not discussed or properly acknowledged in authors’ response **(see ethics section below)** Summary: All reviewers agree that the proposed problem of task migration for network diffusion is interesting, and that the proposed formal framework is elegant; some reviewers, however, found the framework to be somewhat restrictive in its ability to relate to real-world diffusive processes. Theoretical results seem sound, but to fully appreciate their novelty, it would help if the authors provide more details and make concrete connections to the literature on which they draw. The authors’ response to reviewers questions were helpful in this regard. Experimental results look encouraging; nonetheless, several reviewers raised concerns regarding the adequacy of certain baselines, or the lack of comparison to methods that include explicit diffusion modeling. Authors’ responses in this regard were only partially satisfactory. **Ethics:** Reviews by two designated ethics reviewers strongly suggest that the paper frames the decision task it studies in a very one-sided manner, presenting mostly possible benefits (e.g., minimizing the spread of misinformation) and lacking to acknowledge for its evident risks (e.g., maximizing the spread of misinformation). The dual objectives studied—diffusion enhancement (Problem 1) and containment (Problem 2)—make it very clear that virtually every concrete task in this space has potential dual use. **Unfortunately, the authors do not discuss this in the paper, nor were their responses in the discussion appeasing.** Due to this, acceptance is made conditional on the authors’ ability to clearly convey in their writing, and *in the early parts of the paper*, the risks that naturally follow from their proposed work, as they relate to the societally-significant applications they discuss.
train
[ "v6NF7-09kW", "7jl01F0qUR0", "fJZr3E8JJXP", "kk980vE8zBJ", "VC-IZ6BDak18", "rEH2gZhkGLR", "y0ffXENGzUW", "1MEFkrn505M", "unOKGH_oOhl", "zE_Cw3hc8NP", "padG3XRFMr", "7KI2qgB2ds" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Authors,\n\nThank you very much for your in depth response I am satisfied with your response and have upped my score as a result.\nI think the main piece of intuition I'd like to see added would be for the stochastic diffusion model defined in 2.1, and the various tasks defined in 2.2. As someone that was unfamiliar with that area it took me a bit of time to parse what ended up being rather natural definitions for the problem setting in the end.\nSince they make heavy use of graphs I think visual illustrations of these problems would be most beneficial (e.g. tikz figures).", " Thank you very much for your detailed response. I have no further comments and will keep my score. ", " I understand why the paper was flagged for inappropriate potential applications and impact. There is definitely a lot of value of the proposed work grounded on potential positive applications (e.g. curb misinformation), but this a case of potential dual usage in which the same technology could be used, for example, by governments seeking to counter pro-democracy movements, human rights advocates, and other types of community organizing. The authors acknowledge in their response: \"Clearly, misuse of such methods can lead to consequences such as limiting people's access to opportunities. While our research focuses on modeling and theoretical analysis, we agree with the reviewer that this paper should have discussed such issues, and we are happy to do so in our revisions.\" As the authors note in their response, the work should explicitly acknowledge the potential misuses of the proposed work.", " The authors propose a way of utilising inverse decision-making with task migrations towards a development of a generic framework for addressing decision-making tasks in social contagion, e.g. diffusion enhancement / diffusion containment. In practical terms, as given by the authors, these types of methods could be applicable to help with scenarios involving fighting misinformation and suppressing violence-promoting messages, as well as facilitate marketing campaigns.\n\nThe breadth and significance of such applications make it imperative to carefully assess the impact of this work and to provide an ethics statement that dives into such opportunities and risks.\n\nAfter all, isn't this dual-use? We can say that it may help fight misinformation, but wouldn't it be just as useful in maliciously spreading misinformation and facilitating state-sponsored propaganda by authoritarian regimes? It's just a question of the intentions of the user, as well as the type of content - the method itself is agnostic to whether the use is good or bad, no?\n\nDual use does not automatically invalidate this (and similar work), but the very conscious decision to publish and publicise such results at venues where they are likely to be getting a lot of attention - not only from good actors - does have ethical implications.\n\nWith that in mind, the authors need to provide a detailed ethics statement, a risk assessment, and consider risk mitigations.\n\n The authors did not provide an ethics statement that is required for a piece of work on such an important and far-reaching topic. At the first glance, it seems like these methods are dual-use, making it very unclear why it is believed that they would not be misused by bad actors in the future. 
I would strongly recommend for the authors to think on the implications of their work and compose a detailed ethics statement and an in-depth ethics assessment regarding this piece of research.\n\nGiven that social contagion management tasks include both diffusion enhancement and diffusion containment, and can consequently be used either for fighting discrimination and dangerous speech, they can just as easily be used for spreading misinformation and hate speech and causing harms to marginalised groups, if used by bad actors. It is imperative to consider the consequences of releasing generic social contagion management frameworks and making this knowledge open, contextualizing this research given the known ongoing and historical cases of social media being weaponized for propaganda and radicalization.\n\nAs it stands, the paper fails to engage with these implications.\nThe final decision on the publication should take into account these additional justifications, hopefully provided by the authors in the interim.", " We appreciate the reviewer for their time and comments, especially for pointing out the ethic concerns.\n\n- $\\textbf{Presentation.}$ We share the same view with the reviewer that this paper comes with heavy notations and technical concepts, which is primarily because the presented analysis is at the confluence of social contagion management and machine learning theory, which are traditionally not very close to each other. We will carefully check the paper and add more explanations. In this regard, it would be very helpful if the reviewer could advise on the concepts that need more explanations.\n\n- $\\textbf{On Gaussian prior.}$ The reviewer is correct that introducing the Gaussian distribution does not fundamentally change the learning outcome. Directly using the learned weights can be taken as a special case that uses the mean of a Gaussian. Following the convention of PAC-Bayesian [a], we wish to study the general case in the theoretical analysis. \n\n- $\\textbf{Data type and significance.}$ Traditional methods for data-driven decision makings typically follow a two-step framework: learn a decision model from data, and then solve the optimization problem based on the learned model. For example, one can utilize samples of decision-impact pairs to first learn the diffusion model, and then solve the management tasks [b]. Taking a further step, we study the decision-making diagram using query-decision pairs with task migrations for at least two reasons: first, learning the decision model can be another challenging task, which may be unnecessary if one can access query-decision pairs; second, inferring high-quality decisions directly from observed query-decision pairs is not only of theoretical interest but can also open new research avenues in designing decision-making pipelines. This paper offers a primitive formal study on a specific application, hoping to inspire applications beyond social contagion management, for example, the direct perception framework [c] and DARPA's subterranean challenge [d]. \n\n\n- $\\textbf{Generalization error.}$ The generalization bound is taken with respect to the latent data distribution ($D$ in Equation 6), and it characterizes the prediction accuracy in expectation (over the entire domain). The inference for a future query is based on the model learned using the entire sample set. 
\n\n- $\\textbf{On beta and binary loss.}$ The beta introduced in Equation 10 controls the tightness of the margin, in contrast to the existing works where the slack of the margin is constant [e]. Theorem 1 implies that a larger beta leads to a larger empirical risk but a smaller regularization term. In that sense, beta offers a means of balancing different terms, thereby making the true approximation error tunable (Theorem 2). A binary loss coupled with our margin makes the empirical error scaled by the margin difference; combined with Theorem 2, this indicates that the empirical error is in fact scaled by the approximation error, which allows us to factor the sub-optimization problem into the learning process. We thank the reviewer for pointing out the above two issues, and we will add more clarifications to the manuscript. \n\n- $\\textbf{On Naïve Bayes.}$ It is true that Naïve Bayes is more effective than most of the baselines. One plausible reason is that Naïve Bayes makes less strong assumptions compared to deep architectures (DSPN and GNN), and therefore, it offers better generalization performance because of Occam's razor principle. \n\n- $\\textbf{Ethic concerns.}$ The information containment we study here is inherited from the study of rumor blocking and misinformation prevention (e.g., [f, g]). It aims to stop/slow the spread of negative cascades but is not designed as a means of regulating or manipulating online speeches. Clearly, misuse of such methods can lead to consequences such as limiting people's access to opportunities. While our research focuses on modeling and theoretical analysis, we agree with the reviewer that this paper should have discussed such issues, and we are happy to do so in our revisions. \n\n[a] McAllester, David. \"Simplified PAC-Bayesian margin bounds.\" In Learning theory and Kernel machines, 2003.\n\n[b] Du Nan et al. \"Influence function learning in information diffusion networks.\" In ICML, 2014.\n\n[c] Chen Chenyi et al. \"Deepdriving: Learning affordance for direct perception in autonomous driving.\" In ICCV, 2015.\n\n[d] Rouček Tomáš et al. \"Darpa subterranean challenge: Multi-robotic exploration of underground environments.\" In MESAS, 2019.\n\n[e] Wu Yuanbin et al. \"A learning error analysis for structured prediction with approximate inference.\" In NeurIPS, 2017.\n\n[f] Saxena Akrati et al. \"Mitigating misinformation in online social network with top-k debunkers and evolving user opinions.\" In WWW, 2020.\n\n[g] Budak Ceren et al. \"Limiting the spread of misinformation in social networks.\" In WWW, 2011.\n", " We thank the reviewer for their time and constructive comments. In what follows, we will attempt to address some of the concerns.\n\n- $\\textbf{Related work.}$ We share the same feeling with the reviewer that the topic studied in this paper may not be perfectly classified into any of the existing research areas, and our work is at the intersection of social network analysis, combinatorial optimization, and learning theory. We follow the high-level idea of using a score function to quantify the decision qualities, which is inspired by the classic structured prediction, but our entire framework (as well as the proofs) cannot be immediately acquired from any of the existing works. We will re-organize Section F in the supplementary materials to better explain the connections between different components. 
\n\n- $\\textbf{Novelty.}$ Traditional methods for data-driven decision making typically follow a two-step framework: learn a decision model from data, and then solve the optimization problem based on the learned model. For example, one can first learn the diffusion model, and then solve the management tasks [a]. In contrast to the two-step framework, we seek to build decision-making pipelines using query-decision pairs for at least two reasons. First, learning the decision model can be another challenging task, which may be unnecessary if one can access query-decision pairs. Second, learning high-quality decisions directly from observed query-decision pairs is not only of theoretical interest but can also open new research avenues in designing decision-making diagrams. Following this branch, the main novelty of this paper is to introduce the concept of task migration, together with the proposed learning framework and the presented analysis. In addition, we are excited about the acquired experimental results, confirming for the first time that task migrations are indeed manageable. We will rephrase the introduction to further clarify the novelty based on the reviewer’s comments. \n\n\n- $\\textbf{Generalization to other domains.}$ We agree with the reviewer that it is not super clear whether the proposed approach can be applied to other application sectors. It is our aspiration to explore the issue of task migrations in more general settings or other applications, and this paper attempts to offer a primitive formal study on a specific application. In this regard, we are motivated by other applications where the idea of query-decision regression (as well as the task migration issue) may be of interest: one is the direct perception framework [b] that seeks to compute a safe driving (decision) based on the sensor inputs (query), without learning the perception modules; another application is the DARPA Subterranean Challenge [c], where the underground surface (analogous to the diffusion model under our context) is hidden and therefore query-decision regression can be a potential decision-making diagram. \n\n- $\\textbf{Representation.}$ We thank the reviewer for pointing out the representation issues, and we are happy to add more details and intuitions to make the paper more accessible to a wider group of audience.\n\n\n- $\\textbf{Task migration vs. transfer learning.}$ To our knowledge, task migration happens when one wishes to solve one optimization problem using query-decision pairs associated with another optimization problem, while transfer learning in general deals with the case when training data and testing data are generated from different distributions/domains [d]. \n\n- $\\textbf{Miscellaneous.}$ The reviewer is correct that our method is also a learning-based method. Our experiments involve our method (Social-Inverse), other learning-based methods (NB, GNN, and DSPN), a graph-based method (High-Degree), and a random baseline. We will clarify this point in our revisions. \n\n[a] Du, Nan, Le Song, Manuel Gomez Rodriguez, and Hongyuan Zha. \"Scalable influence estimation in continuous-time diffusion networks.\" Advances in neural information processing systems 26 (2013).\n\n[b] Chen, Chenyi, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. \"Deepdriving: Learning affordance for direct perception in autonomous driving.\" In Proceedings of the IEEE international conference on computer vision, pp. 2722-2730. 
2015.\n\n[c] Rouček, Tomáš, Martin Pecka, Petr Čížek, Tomáš Petříček, Jan Bayer, Vojtěch Šalanský, Daniel Heřt et al. \"Darpa subterranean challenge: Multi-robotic exploration of underground environments.\" In International Conference on Modelling and Simulation for Autonomous Systems, pp. 274-290. Springer, Cham, 2019.\n\n[d] Weiss, Karl, Taghi M. Khoshgoftaar, and DingDing Wang. \"A survey of transfer learning.\" Journal of Big data 3, no. 1 (2016): 1-40.\n\n", " First of all, we thank the reviewer for their time and comprehensive comments. \n\n- $\\textbf{Data type.}$ It is indeed more realistic that the platform may observe a mix of two or more types of tasks instead of observing only one type. In the presence of multi-type observations, the proposed framework cannot be directly applied, and our feeling is that new algorithms and analysis must be designed, which seems technically non-trivial. For example, the constraints in the structured SVM need to be updated to accommodate different task types, and as a result, the cutting plane algorithm has to be re-designed to prioritize the constraints. We thank the reviewer for pointing out this issue, which we believe is an important future direction. \n\n- $\\textbf{Benchmarks on specific models.}$ It is true that the benchmark based on specific models would be a nice one, and it is also a very interesting idea to first learn an IC or LT model based on different realizations and then consider task migrations. We thank the reviewer for these inspiring comments. We were not able to find simple benchmarks because the existing works rely on the assumption that the diffusion model is known. On the other hand, considering that the materials in this paper are already heavy, we have limited our experiments to synthetic models. We hope to implement the reviewer's ideas in the following works.\n\n- $\\textbf{On Remark 1.}$ The classic linear threshold (LT) model proposed in [a] was designed for single-cascade diffusion, and in the single-cascade case, our model generalizes the LT model because the LT model can be equivalented described as to first sample a live-edge graph and then execute the deterministic diffusion process [a]. We will revise our paper with more clarifications. As for the multi-cascade case, we assume that a node can be activated by only one cascade in order to model the cascade competition; otherwise, influence containment might be impossible as two cascades will spread independently. In another issue, a user can sometimes observe not only the neighbor’s decision but also the associated cascade; for example, when a user likes/retweets a message with misinformation, we may say that they are activated by the misinformation cascade. However, we must admit that user behaviors in real networks are much more complicated than our modeling assumptions.\n\n- $\\textbf{On beta.}$ Beta can be systematically tuned (by modifying the constraints in Equation 16), and a brief discussion can be found in the supplementary material E.3. As seen from there, the impact of beta seems limited, and the results are robust.\n\n- $\\textbf{On ER graph.}$ We do not have a firm answer, but one plausible reason is that compared to other graphs, the ER graph is generated by uniformly selecting neighbors and therefore lacks salient graph structures, which makes the correlation between the query and decision hard to learn without using a good hypothesis space. 
Our method suffers less because our hypothesis space ensures to contain functions that can well approximate the latent objective function (Lemma 3 in the proof of Theorem 2). \n\n- $\\textbf{Miscellaneous.}$ The diffusion graph is labeled with transmission distributions, and it becomes a weighted subgraph after realization sampling. We thank the reviewer for pointing out the typo as well as the suggestions regarding distinguishing two types of decision makings. We will address them in our revision. \n\n[a] Kempe, David, Jon Kleinberg, and Éva Tardos. \"Maximizing the spread of influence through a social network.\" In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 137-146. 2003.\n", " We thank the reviewer for their time and constructive comments.\n\n- $\\textbf{Neurips’ audience.}$ We agree with the reviewer that some technical contributions of this paper may be better received at other conferences: for example, the proposed algorithms may be of interest to the STOC community, and our analysis of the generalization bounds falls into the central topics of COLT. We wish to submit this work to Neurips because we believe that Neurips is more interdisciplinary than other top venues. For example, social influence has attracted a fair amount of attention from Neurips researchers [a, b]; a few seminal works of data-driven decision making (and inverse engineering) are from Neurips [c, d, e]; our theorems are inspired by existing works that are also published at Neurips [f, g]. We believe that our findings may interest researchers from multiple Neurips communities.\n\n- $\\textbf{Presentation.}$ We thank the reviewer for this comment, and we are happy to add more intuitions/examples to make this paper more accessible to Neurips’ audience, by leveraging the extra page of the cameral-ready version. In this regard, it would be very helpful if the reviewer could advise on the definitions that need further clarifications. \n\n- $\\textbf{Theoretical assumptions vs. real-world practices.}$ We share the same view with the reviewer that the assumptions of this work only remotely mirror real-world practice, and this is mainly because the details about how real social media are operated are often not available to the public. The two considered tasks, information enhancement and information containment, are designed to abstract strategies for launching advertising campaigns and misinformation prevention. Such applications have been widely studied and implemented by real-world network platforms [g]. Furthermore, from a high-level perspective, we believe that task migration is a future topic in developing data-driven pipelines (for the general case). It is our aspiration to explore the issue of task migrations in more general settings or other applications, and this paper attempts to offer a primitive formal study on a specific application. \n\n[a] Du, Nan, Le Song, Manuel Gomez Rodriguez, and Hongyuan Zha. \"Scalable influence estimation in continuous-time diffusion networks.\" Advances in neural information processing systems 26 (2013).\n\n[b] He, Xinran, Ke Xu, David Kempe, and Yan Liu. \"Learning influence functions from incomplete observations.\" Advances in Neural Information Processing Systems 29 (2016).\n\n[c] Donti, Priya, Brandon Amos, and J. Zico Kolter. 
\"Task-based end-to-end model learning in stochastic optimization.\" Advances in neural information processing systems 30 (2017).\n\n[d] Wilder, Bryan, Eric Ewing, Bistra Dilkina, and Milind Tambe. \"End to end learning and optimization on graphs.\" Advances in Neural Information Processing Systems 32 (2019).\n\n[e] Dong, Chaosheng, Yiran Chen, and Bo Zeng. \"Generalized inverse optimization through online learning.\" Advances in Neural Information Processing Systems 31 (2018).\n\n[f] Wu, Yuanbin, Man Lan, Shiliang Sun, Qi Zhang, and Xuanjing Huang. \"A learning error analysis for structured prediction with approximate inference.\" Advances In Neural Information Processing Systems 30 (2017).\n\n[g] Rahimi, Ali, and Benjamin Recht. \"Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning.\" Advances in neural information processing systems 21 (2008).\n\n[h] Wingfield, Nick, Mike Isaac, and Katie Benner. \"Google and Facebook take aim at fake news sites.\" The New York Times 11 (2016): 12.\n", " The authors consider the problem of task migration for decision making tasks in social networks for the purpose of managing contagion events. The authors begin by formally defining a social network (and associated social contagions) as a directed graph of users, where a set of seed users initiate a social contagion, with contagions spreading through edges at a rate defined by a distribution on each edge.\n\nThe authors then define four different contagion management problems, the last of which is the task migration problem. In this setting, we have two contagion management tasks (source and target) on the same social network and have samples from the source task. The goal is to minimize a loss with respect to the target task using those source task samples.\n\nThe authors propose an algorithm for this task migration problem which they dub social-inverse. They bound the generalization error of social inverse and compare their algorithm in simulated experiments to supervised learning algorithms and a heuristic that selects high-degree users as seed nodes. This paper addresses an important problem (task migration) in a very general setting that can be applied to many different instances (contagion management in social networks). Unfortunately I am unable to leave an educated review since this paper seems to be targeted at a highly technical research sub-community which I have had no prior exposure to.\n\nMy primary concern is that this paper might not be accessible to most of Neurips' audience and thus might be better received at more theoretical conferences like STOC or COLT. However, I do not feel as though this should be a disqualifying factor for acceptance, especially if there is a large enough community at Neurips that would find this paper useful.\n\nGoing through the theoretical results, the assumptions seem reasonable and I can't see any glaring red flags in their resulting theorems.\n\nI do not feel like I have enough context to critically interpret the significance of the improvements in the experimental results. \n\nOverall this paper seems sound to me, but I have concerns about the accessibility of the paper. Hence I recommend a borderline accept with very low confidence. I think the authors could significantly improve the paper if they spent some time providing more intuition in the first few sections and also added some concrete examples to aid the interpretation of their definitions. 
How closely do the assumptions of this work mirror real-world practices in social contagion management?\nFor example, the introduction states that \"For example, a network manager may need to work on a rumor blocking task, but they only have historical data collected in solving viral marketing tasks.\" are there any existing cases where such a network manager would employ any methods that are remotely similar to social-inverse instead of just proceeding based on intuition/industry best practices for their interventions? Limitations were well addressed by the author.", " The paper considers a semi-opaque-box influence network and the cumulative impacts from information dissemination at a few seed nodes. The network is semi-opaque in the sense that we know the general connectivity of the nodes, but not the actual realization of the influence, which is sampled from an unknown distribution with support by the observed connections. To make up for the observability gap, the paper suggests to use previous examples of information diffusion, collected in the form of which targets to reach (queries) and which seeds would maximize the coverage of the targets after diffusion (decisions). It then proposes to use random samples of possible realizations to solve a max-margin (SVM) optimization problem to find out the most likely linear combination of the realizations. The paper finally uses the learned weighted combination of realizations to decide for optimal solutions in new information diffusion tasks. Theoretical and empirical studies were included. Strengths:\n* Quality: The paper is nicely presented with thorough empirical studies.\n* Originality: The paper solves an essentially \"graph-kernel\" problem using inspirations from random kitchen sink. This sounds like a reasonable idea.\n\nWeaknesses:\n* Theory: The presentation is not entirely clear and I have some questions about some unexplained terms in the key algorithm and theoretical analysis.\n* Significance: We have to accept the assumption that the training data is presented as query-decision pairs, instead of the more commonly used decision-impact pairs, that is, the actual coverage caused by the seed nodes.\n* Ethics: The paper is missing ethics discussions on a potentially sensitive topic. * Line 172. Why do you introduce a Gaussian distribution after you learned the optimal combinations of realizations? Doesn't this introduce pure noise?\n* Line 185. Related, the generalization error considers the mean of the sample you use for inference, yet in the algorithm description, you use only one point in the sample for inference. Does this generalization error cover the full risk?\n* Line 191. I do not understand the use of beta. Also, why do you analyze a binary loss when your objective contains an optimization sub-problem?\n* Table 1. Can you elaborate on Naive Bayes? It seems to be the second-best in the considered baselines, just below High Degree. Overall, I can get some inspirations from the proposed method and empirical studies, but some key aspects in the algorithm were left unexplained, namely eta and the extra step of Gaussian sampling. The theory presentation does not meet the expectation of the NeurIPS community, perhaps because too many new concepts were introduced without good explanations.\n\nThough the paper is purely methodological, I flagged it for ethics reviews due to some word choices, such as information containment, which may lead to a limitation of people's access to opportunities. 
The authors should be also be advised on the creation of an ethics discussion section.", " This paper studies an inverse decision-making problem for task migrations on contagion management. Specifically, it investigates how prior decision-making on diffusion containment can be migrated to diffusion enhancement; and vice versa. It provides a theoretical analysis of how different diffusion management tasks can be correlated such that samples from one task can be useful for another task. It performs empirical analysis on four graphs, comparing with several benchmarks (used for supervised learning) to evaluate the performance. Strengths: \nI find the idea of task migration on contagion management novel and important (although I have some concerns about the detailed setup (see weakness 1). The paper provides rigorous theoretical analysis and shows empirical effectiveness on four datasets. \n\nWeaknesses:\n(1) I think the setup that the platform only observes one type of task (either diffusion containment or enhancement) does not seem realistic. It is more likely that the platform will observe a combination of both. I think this is a crucial question: How realistic is the task migration problem proposed in this paper? \n\n(2) Given that many models have been proposed in the diffusion literature (also cited by the paper), designing benchmarks w.r.t. based on these models is important to show the effectiveness of the methods, rather than other ML methods that do not know the underlying diffusion process. \n(1) In remark 1, the paper mentioned that the model generalizes popular diffusion models, including the linear threshold model, which requires a percentage or number of neighbors to adopt before the users adopt. However, in lines 79—80 on P2, the paper mentions that the node will only be activated by \"first in-attempting neighbors\" or activated by the cascade with the smallest index. These two scenarios both conflict with the linear threshold model. Why can't a node be activated by two or more cascades? In reality, a user only observes his/her neighbor's decisions, rather than which cascade resulted in the neighbors' decision. \n\n(2) L88–89, why is the subgraph weighted? From the description in Line 71—72, I think the subgraph is attributed instead of weighted? \n\n(3) There are two decision-making problems in the paper, one is the decision-making of users, and the other is the decision-making of the central planner (I believe referred to as agent by the paper). Given that both are important concepts yet are entirely different, I suggest distinguishing the two concepts to avoid confusing the readers, e.g., user decision-marking vs. planner decision-making). \n\n(4) \\beta is an important parameter that controls the trade-off between estimation and approximation errors. How can it be tuned? It seems that the experiment section directly uses beta = 1. If it cannot be systematically tuned, it might also help to show the robustness of the method w.r.t. different beta \n\n(5) Regarding the experiment, I think it will help add a benchmark, which assumes either a linear threshold model or an independent cascade. The parameters in these models can be learned based on different realizations. That is, since there is rich research on the diffusion model in sociology, it seems sensible to use them directly in such a migration task. \n\n(6) I wonder how realistic in practice do we assume the platform has information about all DC but no DE, or all DE but no DC? 
It seems more reasonable to observe a mix of both. Can you provide some insights on how the method performs (especially compared with the benchmarks) if this is the case? \n\n(7) Any insights on why the benchmarks perform this badly on ER? \n\n(8) Minor: There is a typo on line 84, \"For sample\" —> \"For example\"\n NA", " The paper presents a framework for task migration in the context of contagion management in social networks, in which, a target decision-making task is solved using the historical data collected from another task. The authors define a generic problem formulation that covers the tasks of diffusion enhancement as well as diffusion containment, and further derive a reformulated optimization problem that can be coupled with several classic algorithms to provide an approximated solution in a polynomial time. Strengths:\n\n+ Originality: The paper develops a novel framework for task migration between two classes of social management tasks with a generic problem definition and (re-)formulation of the objectives.\n\n+ Quality: The authors theoretically analyze the generalization ability of their approach across tasks and show that the generalization error is bounded. Subsequently, they identify potential conditions on the design choices that can affect the generalizability, and thus, some aspects/parameters are chosen accordingly. In addition, they conduct considerable empirical study evaluating the performance of their method in various setups and compared to several baselines.\n\nWeaknesses:\n\n- Significance: The proposed approach is not well located within the literature, and the discussion of the related work seems to be limited. Additionally, in terms of approaches, it is mentioned that the ideas are closely related to structure prediction methods (refs. 40-42), but the connection to those work is not clarified; what are the similarities/differences with those methods? in what sense the framework is novel compared to them? Moreover, the generalization of the approach to other domains and problems and the applicability of the work on real network data is not clear. As a result, the outcomes only impact a narrow scope which leads to limited significance.\n\n- Clarity: The paper is well written, but it is a bit hard to follow. The writeup could be improved in terms of structure and choice of content, Some details about the background information and algorithmic characteristic can be expanded.\n\n - How the concepts of \"task migration\" relate to \"transfer learning\"?\n\n- It is not completely clear what the authors mean by learning-based methods. Isn't your approach a learning framework?\n\n No discussion is provided in that regard. However, given the nature of the work, which is related to the social network analysis, a discussion about societal impact of the approach would be interesting. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 4, 3 ]
[ "1MEFkrn505M", "y0ffXENGzUW", "nips_2022_q__FmUtPZd9", "nips_2022_q__FmUtPZd9", "zE_Cw3hc8NP", "7KI2qgB2ds", "padG3XRFMr", "unOKGH_oOhl", "nips_2022_q__FmUtPZd9", "nips_2022_q__FmUtPZd9", "nips_2022_q__FmUtPZd9", "nips_2022_q__FmUtPZd9" ]
nips_2022_LCIZmSw1DuE
Fair and Optimal Decision Trees: A Dynamic Programming Approach
Interpretable and fair machine learning models are required for many applications, such as credit assessment and in criminal justice. Decision trees offer this interpretability, especially when they are small. Optimal decision trees are of particular interest because they offer the best performance possible for a given size. However, state-of-the-art algorithms for fair and optimal decision trees have scalability issues, often requiring several hours to find such trees even for small datasets. Previous research has shown that dynamic programming (DP) performs well for optimizing decision trees because it can exploit the tree structure. However, adding a global fairness constraint to a DP approach is not straightforward, because the global constraint violates the condition that subproblems should be independent. We show how such a constraint can be incorporated by introducing upper and lower bounds on final fairness values for partial solutions of subproblems, which enables early comparison and pruning. Our results show that our model can find fair and optimal trees several orders of magnitude faster than previous methods, and now also for larger datasets that were previously beyond reach. Moreover, we show that with this substantial improvement our method can find the full Pareto front in the trade-off between accuracy and fairness.
Accept
I recommend acceptance due to the strengths identified by the positive reviews, despite some doubts expressed by more negative reviews. This paper modifies existing dynamic programming approaches for learning decision trees to accommodate non-monotonic constraints, motivated in particular by group fairness. Experiments show that this approach is orders of magnitude faster than existing alternatives. The main unresolved reviewer concern is novelty---how much of a contribution is the ability to handle non-monotonic constraints? If this paper were exclusively targeted to the decision tree community, this would be an important concern. However, I view the significance of this contribution in terms of making decision trees a computationally tractable option for designing fair decisions. Interpretability is an important concern in many domains where fairness is an issue, and thus this is an important contribution.
train
[ "n6ec7S38Ihv", "Bpspo1x4WvY", "WWKVq1GSRO1", "3Gym-fmgA35", "JhIAK2dt_i", "Rpqwfz76Hs", "mnCW8j86YEl", "MGhYree5Sn0", "d47B1ZlR4Vx" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed and thorough response, you have clarified my doubts.", " Dear reviewer,\n\nThank you for your review of our work. Thank you also for your detailed feedback on some of our writing. Based on your comments we have been able to improve the clarity of our writing and explain better how this work contributes to the literature. We will here provide some detailed responses. (Your comments are put in italics, and our response follows afterward.)\n\n\n1. _From section 4 on, the paper does not flow very well. (...) I believe the quality of the upper/lower bounds directly affect the quality of the solutions found so I do not find the sentence \"These bounds can be trivial, ...\" in line 202 really convincing._\n\nThank you for the feedback on our method section. We have critically reexamined it and changed the order of explanation based on your feedback. We have also included a clearer explanation of how the bounds are obtained.\n\nSee also our response to your question 3 (Q3) under point 4 below.\n\n\n2. _(Q1) How good are the set of Pareto frontiers? Can it be that a solution that is not dominated by any other solution is not really a good one?_\n\nThe reviewer is correct that deciding which Pareto optimal solution to select in a generic way is difficult. We believe the responsible domain-expert is the only person that can decide on the trade-off between accuracy and fairness, since the specific application and context need to be considered. Therefore, the generation of the Pareto front is precisely one of the advantages of our method because it assists the domain-expert in better decision making. We mentioned this in the introduction, but will make it more clear.\n\n\n3. _(Q2) In equation 11 and 12, is it not the case that the lower bound is always equal to the lowerbound of the rest + I_n and the upper bound is equal to the upperbound of the rest + I_n? I do not understand why you wrote it with min/max?_\n\nThe min/max is necessary because we are calculating the lower and upper bounds for the absolute value of fairness. Consider a lower bound of the rest -8 and an upper bound +2, and a fairness value of the current node +1, then the final fairness value will be in the range [-7,+3]. The minimum absolute value in this range is 0 (according to Eq. 11), and its maximum is 7 (according to Eq. 12). \n\nTo make this more clear we explained these equations more clearly in the text.\n\n\n4. _(Q3) It is not very easy to understand equations 14-17. What is the function U exactly? How exactly does the merge function work? if it works the same as defined in Eq.9, then how is it using the upper/lower bound information? If it is not using that information, why is it input to the function?_\n\nThe function U returns the best known bounds for a subtree. This function returns either:\n* The best known bounds based on cached solutions (e.g., when the other subtree has already been examined).\n* Bounds based on dataset inspection. E.g., what would the worst fairness value be if all instances from one group would receive a positive label, and all instances from the other group would receive a negative label (or the other way around).\n\nThe merge function indeed does not use the upper/lower bound information. In Eq. 15 is passed to $T_F$, not to the merge function.\n\nBased on your feedback we have made this more clear in the method section.\n\n\n\n5. 
_Maybe they could also talk about space complexity and how much storage they used to store the intermediate solutions, since they use a dp approach._\n\nA worst case bound for the space complexity is given by the number of possible distinct trees, since we store all subproblem solutions in the cache. This number of distinct trees is precisely expressed by Hu et al. [23]. A simple loose upper bound is $O(f^{2^d-1})$. \n\nIn our experiments, however, we have never reached space limitations.\n\nWe will add a discussion of runtime and space complexity in the appendix.\n\n\n\n6. _They could also show the performance considering more fairness criteria rather than just demographic parity._\n\nOur contribution indeed enables the consideration of other global non-monotonic constraints, when upper and lower bounds can be expressed. This means that our method is applicable to a broader spectrum of constraints. We mention the extension to other notions of fairness in our future work. Specifically, other group-based notions of fairness, such as equality of opportunity, can be considered by the method we propose.\n\n\n7. _Their approach is applicable to binary classification with binary features. It would be nice to mention what happens if the features are not binary and whether the algorithm could scale._\n\nOur approach would also work for multi-valued decision trees, but the branching factor would increase. In our work, we assume that non-binary features are binarized first, which is a common assumption in other optimal fair works, e.g., [25]. \n\nFinally we want to thank the reviewer again for reading and reviewing our paper and for the helpful comments.\n\nThe authors", " Dear reviewer,\n\nThank you for your review of our work. Based on your comments we have been able to improve the clarity of our writing and explain better how this work contributes to the literature. We will here provide some detailed responses.\n\n\n1. _(i.a) [Novelty.] (...)My (possibly wrong) impression is that the authors essentially implemented the ideas from two existing works (references [14] and [15] of the paper) for this paper._\n\nThe non-trivial novelty in our work is that our method can optimize a non-monotonic global constraint. In contrast, previous work [14,15] requires monotonic objectives. Further note that the bounds are only one of our contributions, we also introduced several algorithmic techniques to provide notable speed ups (Table 2), and a study of scalability (Figure 2). Overall our approach provides orders of magnitude improvements over previous approaches.\n\nThis is currently explained in the introduction and conclusion, and we will explain more clearly in our method section why solving subproblems with a shared constraint is not trivial.\n\n\n2. _(i.b) [The] authors could expand on other constraint classes that could leverage the methodology, or whether there are other fairness definitions that could also be accommodated by the technique._\n\nOur contribution indeed enables the consideration of other global non-monotonic constraints, when upper and lower bounds can be expressed. This means that our method is applicable to a broader spectrum of constraints. We mention the extension to other notions of fairness in our future work. Specifically, other group-based notions of fairness, such as equality of opportunity, can be considered by the method we propose.\n\n\n3. _(ii.a) [Scalability and Usage.] It is impressive that the authors are enumerating the full Pareto frontier (...) 
how large are the problems that actually can be solved by DP in this context?_\n\nThank you for this comment on the impressiveness of enumerating the whole Pareto frontier. We respond to your questions as part of our response to your question 2 (Q2) below in point 6.\n\n\n4. _(ii.b) (...) what would be the guidelines/insights to pick the appropriate tree among such a possibly large set? _\n\nWe believe the responsible domain-expert is the only person that can decide on the trade-off between accuracy and fairness, since the specific application and context need to be considered. Our method now makes it possible to enumerate the Pareto front to assist in better decision making. We mentioned this in the introduction, but will make it more clear.\n\n\n5. _(Q1) Given the nice performance, is it possible to extend the approach to non-binary trees?_\n\nOur approach would also work for multi-valued decision trees, but the branching factor would increase. In our work, we assume that non-binary features are binarized first, which is a common assumption in other optimal fair works, e.g., [25]. Our method could also be extended to multi-labeled decision trees. However, when considering fairness, a binary label is what is typically considered (the preferred outcome vs the non-preferred outcome).\n\n\n6. _(Q2) Could authors discuss the scalability of the approach for datasets beyond the ones presented?_\n\nThe total number of possible distinct trees is the most important factor in its scalability (as also stated in [15]). The worst case runtime complexity of DPF is $O(T n )$, with $n = |D|$ the size of the dataset, and $T$ the number of possible distinct trees. This number is precisely expressed by Hu et al. [23]. A simple upper bound is $T \\leq O(f^{2^d-1})$. Therefore we can state that $O(nf^{2^d-1})$ is a loose bound on the runtime. \n\nIn our experimental runtime analysis in Figure 2 we confirm these findings: dataset size is not very important, and by reducing the number of possible distinct trees, we observe a drop in runtime.\n\nWe will add a discussion of runtime complexity in the appendix, thank you for the suggestion.\n\n\n7. _(Q3) Labelling approaches in multiobjective DP optimization can be significantly improved when using bidirectional search, such as the one presented in [1]. (...) I wonder how this relates to Algorithm 1, or whether that could help improve the methodology?_\n\nThis is an interesting idea. Thank you for suggesting it. It seems that the intuition behind the algorithm presented in [1] is that the search is symmetrical, i.e., source and target nodes could be swapped and the problem would be the same. This, however, is not the case with optimal decision tree search: root and leaf nodes are not symmetrical, and the number of possible leaf nodes is exponential.\n\nOverall, including this idea would require further research into how a bottom-up search in decision tree search would work. We will consider this as future work; thank you for the suggestion.\n\nFinally we want to thank the reviewer again for reading and reviewing our paper and for the helpful comments.\n\nThe authors\n", " Dear reviewer,\n\nThank you for your review of our work. Based on your comments we have been able to improve the clarity of our writing and explain better how this work contributes to the literature. We will here provide some detailed responses. (Your comments are put in italics, and our response follows afterwards.)\n\n\n\n1. 
_No theorem or propositions to show the advantage of DPF theoretically._\n\n(As we have also answered the review by Reviewer NysV, point 2, Q1)\n\nTheoretical guarantees of optimal trees on out-of-sample accuracy and fairness is an important open question, both in our work and other related work. However empirically it has been shown that optimal trees do provide better results on out-of-sample accuracy [9,14,15]. This is also the case when considering fairness as both [25] and our results (Table 3) show.\n\nSee also our response to your question 3 (Q3) in point 4 below.\n\n\n\n2. _Pseudo code would be nice to have in the main article than in the appendix to help the readers understand the algorithm_\n\nWe agree with the reviewer that moving the pseudo-code from appendix to the main paper would be an improvement. We will do so in the camera-ready version with the extra page.\n\n\n\n3. _(Q1) In some of the other decision tree models, putting samples into different bins help significantly with the split finding and training speed. In this work, when keeping all the non-dominated submodels, is it possible to use this technique?_\n\n _(Q2) If the technique in question 1 is possible, what are some of the conditions that guarantee when splitting at the bin boundaries results in two non-dominated submodels, all other splits within this bin are also non-dominated? Is this somehow related to the monotonic condition?_\n\n\nIn our approach we assume the binarization is performed as a preprocessing step, and we adopt the binarization as done in [30], as also explained in our experimental setup. For this, or any other binarization, our model will provide the optimal Pareto front. \n\nFurthermore, we would like to highlight that our method scales linearly (in the worst case) for the number of samples, reducing the need for putting samples into bins.\n\n\n\n4. _(Q3) Is it possible to come up with theoretical reasonanings on the efficiency of DPF? Can you specify when, or on what type of dataset DPF can perform much better than the related works and when not?_\n\nIntuitively the advantage of our method is that we exploit the structure of decision trees which eliminates symmetries, and enables the reuse of computed subproblems, both due to our dynamic programming formulation, as explained in our introduction section. These points are difficult to include in MIP formulations. Moreover, our method is not much impacted by the size of the dataset (Figure 2), whereas in MIP formulations new binary variables need to be introduced for each dataset instance.\n\nNote that based on our experiments, our method is orders-of-magnitude faster than the competing methods, and we have not found a dataset where other approaches would outperform our method.\n\n\n\n5. _(Q4) DPF is compared to other in-process fairness oriented algorithms in the article, so a natural question to ask is can DPF be compared to or combined with pre-process fairness oriented methods or post-process fairness oriented methods?_\n\nThe focus of our work is on computing optimal trees faster than previous optimal (in-processing) methods, and therefore we highlight the improvements in scalability in our experimental results section. \n\nIn the appendix we compare our method DPF to the in/post-processing method of Kamiran [25] and show the advantage of DPF. 
\n\nIn response to the reviewer’s question, yes, we have now also compared DPF to the preprocessing method presented in Kamiran and Calders in 2009 [26], which “massages” the training data by changing the labels such as to remove bias from the training data. These results have now been included in Table 3 in the appendix and also clearly show the advantage of DPF over this method.\n\nFinally we want to thank the reviewer again for reading and reviewing our paper and for the helpful comments.\n\nThe authors\n", " Dear reviewer,\n\nThank you for your review of our work. Based on your comments we have been able to improve the clarity of our writing and explain better how this work contributes to the literature. We will here provide some detailed responses. (Your comments are put in italics, and our response follows afterwards.)\n\n\n\n1. _Adding only group fairness constraint and pruning the trees through trivial bounds are straightforward extensions._\n\nThe non-trivial novelty is that our method can optimize a non-monotonic global constraint. In contrast, previous work [14,15] requires monotonic objectives. Further note that the bounds are only one of our contributions, we also introduced several algorithmic techniques to provide notable speed ups (Table 2), and a study of scalability (Figure 2). Overall our approach provides orders of magnitude improvements over previous approaches.\n\nThis is currently mentioned in the introduction and conclusion, and we will explain more clearly in our method section why solving subproblems with a shared constraint is not trivial: it is no longer possible to determine which solution(s) are best, without considering the rest of the tree.\n\n\n\n2. _(Q1) Is there a theoretical guarantee that obtaining the \"global\" optimal solution to the posed problem would lead to higher accuracy and/or fairness for the out-of-sample instances?_\n\nTheoretical guarantees of optimal trees on out-of-sample accuracy and fairness is an important open question, both in our work and other related work. However empirically it has been shown that optimal trees do provide better results on out-of-sample accuracy [9,14,15]. This is also the case when considering fairness as both existing work [25] and our results (Table 3) show.\n\n\n\n3. _(Q2) What is the impact of highly imbalanced datasets on the performance of the proposed algorithm?_\n\nOur current experiments contain at least two datasets that are highly imbalanced (majority class with more than 90% of the instances: KDD census income and Communities & Crime). We will make this more clear by including in the appendix a table that describes per dataset the number of positive and negative instances.\n\nIntuitively, imbalanced datasets are easier since the initial bounds on misclassification score can be more tight.\n\n\n\n4. _(Q3) The authors have reported only the runtimes for DPF and FairOCT. How about the accuracies and fairness scores? In case of multiple optimal solutions, one would think that the out-of-sample performances may differ. Do these methods obtain exactly same solutions?_\n\nOur focus is on computing optimal fair trees faster. Therefore runtime is the main metric for comparison since both our method and FairOCT solve exactly the same problem to optimality. \n\nIn Appendix C Table 3 and Figure 3 we report on the out-of-sample accuracy and fairness of DPF and a heuristic. Based on another reviewer's comment we have added another heuristic. 
We did not include a comparison with FairOCT because DPF and FairOCT obtain the same optimal solution (or “randomly” select a solution from the same set of optimal solutions). We have made this more clear in Appendix C.\n\n\n\n5. _(Q4) How do the computation times of the proposed method compare against the method of Kamiran? Are the results for Kamiran in Table 3 obtained after tuning the hyperparameters of the method?_\n\nThe method of Kamiran et al. has negligible runtime, but note that it does not provide optimality guarantees. The Kamiran method was hypertuned in the same way as the DPF method (tuned for the number of nodes), as is mentioned in the text in appendix C.\n\n\n\n6. _The following recent paper discusses fairness in optimal trees by formulating MIP. Their results seem promising. It would be great if the authors also review this paper and highlight their differences: Jo, N., Aghaei, S., Benson, J., Gómez, A., & Vayanos, P. (2022). Learning Optimal Fair Classification Trees. arXiv preprint arXiv:2201.09932._\n\n\nThe reviewer pointed out a recent promising paper. We discuss the referenced paper in our work, see FairOCT [25]. Our results show orders-of-magnitude improvements over the referenced paper (see Table 1).\n\nFinally we want to thank the reviewer again for reading and reviewing our paper and for the helpful comments.\n\nThe authors", " The authors studied the optimal and fair decision trees. They proposed a dynamic programming method considering the fairness constraint. Due to the global fairness constraint, the decision tree is not separable. Using upper and lower bounds on the fairness values, the authors decrease the search space and improve the solution time. With the use of upper and lower bounds on fairness, the authors set forth a new dominance relation. The same bounds also lead to a new pruning mechanism. Consequently, the authors obtain a method that finds the Pareto front, and it is faster than the other methods proposed in the literature.\n\nThe paper is mainly based on two papers [14, 15] that authors have also cited. Adding only group fairness constraint and pruning the trees through trivial bounds are straightforward extensions. In its current form, the work does not introduce enough novelty to the literature.\n Is there a theoretical guarantee that obtaining the \"global\" optimal solution to the posed problem would lead to higher accuracy and/or fairness for the out-of-sample instances?\n\nWhat is the impact of highly imbalanced datasets on the performance of the proposed algorithm?\n\nThe authors have reported only the runtimes for DPF and FairOCT. How about the accuracies and fairness scores? In case of multiple optimal solutions, one would think that the out-of-sample performances may differ. Do these methods obtain exactly same solutions? \n\nHow do the computation times of the proposed method compare against the method of Kamiran? Are the results for Kamiran in Table 3 obtained after tuning the hyperparameters of the method?\n\nThe following recent paper discusses fairness in optimal trees by formulating MIP. Their results seem promising. It would be great if the authors also review this paper and highlight their differences:\n\nJo, N., Aghaei, S., Benson, J., Gómez, A., & Vayanos, P. (2022). Learning Optimal Fair Classification Trees. arXiv preprint arXiv:2201.09932.\n\n Limitations are discussed in the conclusion section.", " The authors designed a specialized algorithm, DPF(Dynamic Programming Fair?) 
that can find fair optimal decision trees, i.e., the decision trees with the best performance metrics under a given fairness gap constraint. The algorithm can also return a Pareto front of performance and fairness and use the front to filter out dominated splits at each level. Dynamic programming is used in DPF to find possible splits. The authors show numerical results that DPF is much more efficient to train compared to related works like FairOCT. Strengths:\n1. Strong numerical results showing that DPF is much more efficient to train compared to related works like FairOCT\n2. Clear Problem statement\n\nWeaknesses:\n1. No theorem or propositions to show the advantage of DPF theoretically\n2. Pseudo code would be nice to have in the main article than in the appendix to help the readers understand the algorithm 1. In some of the other decision tree models, putting samples into different bins help significantly with the split finding and training speed. In this work, when keeping all the non-dominated submodels, is it possible to use this technique? \n2. If the technique in question 1 is possible, what are some of the conditions that guarantee when splitting at the bin boundaries results in two non-dominated submodels, all other splits within this bin are also non-dominated? Is this somehow related to the monotonic condition? \n3. Is it possible to come up with theoretical reasonanings on the efficiency of DPF? Can you specify when, or on what type of dataset DPF can perform much better than the related works and when not?\n4. DPF is compared to other in-process fairness oriented algorithms in the article, so a natural question to ask is can DPF be compared to or combined with pre-process fairness oriented methods or post-process fairness oriented methods? As discussed in the weaknesses already.", " This paper proposes a dynamic programming (DP) model to construct optimal decision trees subject to fairness constraints, i.e., where the positive classification observes demographic parity. The methodology extends an existing bi-objective recursive reformulation for decision trees and returns the full Pareto frontier in terms of the optimal and fairness criteria. In particular, the authors develop upper and lower bounds on the fairness constraint to ensure all generated solutions are feasible, in addition to pruning techniques to speed up the frontier enumeration. The numerical study evaluates the approach against current (mostly MIP-based) state-of-the-art methods. Strengths\n- Very strong numerical results, significantly outperforming the state of the art\n- Idea is intuitive and easy to implement\n- Paper very well written\n\nWeaknesses\n- I have concerns about the contribution; except for the lower or upper bounds, the methodology itself seems to be derived in a somewhat straightforward way from existing works\n- Scalability and usage could be detailed more thoroughly\n\nMajor comments\n\nOverall, I greatly enjoyed reading the work because the methodology is intuitive, involves interesting bounding procedures associated with the structure of the fairness constraint, and the numerical results are quite strong, outperforming MIP-based models very prominently. However, I have two major concerns after reading the work.\n\n(i) [Novelty.] My (possibly wrong) impression is that the authors essentially implemented the ideas from two existing works (references [14] and [15] of the paper) for this paper. 
More precisely, they specialized the bi-objective approach to accommodate fairness constraints by considering interval-based labelling to ensure the full Pareto frontier was enumerated correctly. While the effectiveness is apparent and there is some novelty in the upper/lower bounds, this brought me some concerns about whether the results are somewhat incremental to existing literature. I believe this could be addressed in multiple ways. For instance, authors could expand on other constraint classes that could leverage the methodology, or whether there are other fairness definitions that could also be accommodated by the technique.\n\n(ii) [Scalability and Usage.] It is impressive that the authors are enumerating the full Pareto frontier. This shines light into two usual questions that also follow any type of multiobjective work. First, it is unclear how scalable that is; while it worked well for the datasets in the numerical experiments, how large are the problems that actually can be solved by DP in this context? Second, even if the Pareto frontier can be fully enumerated, what would be the guidelines/insights to pick the appropriate tree among such a possibly large set? It would be great if authors could expand further on those concepts.\n\n\n\n 1. Given the nice performance, is it possible to extend the approach to non-binary trees? It would not be surprising to find cases where the DP would perform well (if the state space is still relatively \"compact\").\n\n2. Could authors discuss the scalability of the approach for datasets beyond the ones presented?\n\n3. Labelling approaches in multiobjective DP optimization can be significantly improved when using bidirectional search, such as the one presented in [1]. That is, the state-space graph of the DP is constructed from top-down and bottom-up separately, and the Pareto frontier is \"merged\" when the two layers meet. This is the key performance in state-of-the-art multiobjective models, such as the ones appearing in shortest-path subproblems for vehicle routing. I wonder how this relates to Algorithm 1, or whether that could help improve the methodology?\n\n[1] Galand L, Ismaili A, Perny P, Spanjaard O (2013) Bidirectional preference-based search for state space graph problems. Proceedings of the Sixth International Symposium on Combinatorial Search N/A", " The paper address the problem of finding fair and accurate decision trees. Using demographic parity as the fairness definition, they try to minimize misclassification while at the same time satisfying a fairness constraint. The problem is hence formulated as a multiobjective problem in which both misclassification error and group imbalance are minimized. Having defined a dominating solution as one which is similar or better in both objectives, they search for Pareto fronts of nondominated solutions.\n\nPrevious work have proposed dynamic programming approaches for biobjective optimization in decision trees, and finding the Pareto front of nondominated solutions in the case where the objectives are additive and monotonic. Since the imbalance part of their objective is not monotonic (contains absolute value), the dynamic programming approach cannot be directly used. As a result, the authors propose a way to calculate the upper and lower bound of the imbalance value, and later on, redefine the dominance relation based on these values. 
They then propose a dp approach that finds the Pareto frontiers and improve the time complexity of the search by pruning the search space based on the upper and lower bound values of the imbalance. Strengths\n- very well written except for some typos mentioned below\n- they consider a well-motivated problem\n\nWeaknesses\n\n- From section 4 on, the paper does not flow very well. In the end of section 3, it gets clear that due to non-monotonicity of the imbalance function, the dp approach cannot be directly used and the reader is waiting for that problem to be addresses. In the beginning of section 4 you directly mention that you present DPF, without saying what it stands for, and without saying how it tries to overcome the monotonicity problem. Section 4 is hard to follow. Not everything is clearly defined such as the meaning of (R), being the rest of the tree, and how exactly you calculate the upper/lower bounds. I believe the quality of the upper/lower bounds directly affect the quality of the solutions found so I do not find the sentence \"These bounds can be trivial, ...\" in line 202 really convincing. \n\nThere are further presentation issues mentioned below:\n- punctuations before/after equations are completely missing\n- typo in the first line of Eq.8\n- Eq 8 needs some explanation. You copy it from another paper without introducing what it is searching for. For example, it is not clear what (|D|,0) is showing. I had to refer to the cited papers to understand but I think it is better if the paper is self-contained and the reader does not need to search for the meaning of notation elsewhere.\n- lines 176-178 are not well written\n- line 191, \"is\" is missing\n- you never mention what is \"DPF\". You use the abbreviation from the first occurrence. I would like to thank the authors for taking the time to write the paper in a well-organized way. It was my pleasure to review the paper. I have a few questions from the authors: \n\n- How good are the set of Pareto frontiers? Can it be that a solution that is not dominated by any other solution is not really a good one? Assume the accuracy is very high but it is at the cost of very low fairness (in the tuple (M,I), M is very high and I is very low. Then it could happen that no other solution can dominate this one because no solution can beat M, but the solution is very imbalanced. \n\n- In equation 11 and 12, is it not the case that the lower bound is always equal to the lowerbound of the rest + I_n and the upper bound is equal to the upperbound of the rest + I_n? I do not understand why you wrote it with min/max?\n\n- it is not very easy to understand equations 14-17. What is the function U exactly? How exactly does the merge function work? if it works the same as defined in Eq.9, then how is it using the upper/lower bound information? If it is not using that information, why is it input to the function?\n they have provided the run time of their approach so it gets clear that it gets higher as the dataset or the number of features grows. Maybe they could also talk about space complexity and how much storage they used to store the intermediate solutions, since they use a dp approach. They could also show the performance considering more fairness criteria rather than just demographic parity. Their approach is applicable to binary classification with binary features. It would be nice to mention what happens if the features are not binary and whether the algorithm could scale." ]
[ -1, -1, -1, -1, -1, 4, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, 4, 1, 4, 3 ]
[ "WWKVq1GSRO1", "d47B1ZlR4Vx", "MGhYree5Sn0", "mnCW8j86YEl", "Rpqwfz76Hs", "nips_2022_LCIZmSw1DuE", "nips_2022_LCIZmSw1DuE", "nips_2022_LCIZmSw1DuE", "nips_2022_LCIZmSw1DuE" ]
nips_2022_v6CqBssIwYw
Instance-Based Uncertainty Estimation for Gradient-Boosted Regression Trees
Gradient-boosted regression trees (GBRTs) are hugely popular for solving tabular regression problems, but provide no estimate of uncertainty. We propose Instance-Based Uncertainty estimation for Gradient-boosted regression trees (IBUG), a simple method for extending any GBRT point predictor to produce probabilistic predictions. IBUG computes a non-parametric distribution around a prediction using the $k$-nearest training instances, where distance is measured with a tree-ensemble kernel. The runtime of IBUG depends on the number of training examples at each leaf in the ensemble, and can be improved by sampling trees or training instances. Empirically, we find that IBUG achieves similar or better performance than the previous state-of-the-art across 22 benchmark regression datasets. We also find that IBUG can achieve improved probabilistic performance by using different base GBRT models, and can more flexibly model the posterior distribution of a prediction than competing methods. We also find that previous methods suffer from poor probabilistic calibration on some datasets, which can be mitigated using a scalar factor tuned on the validation data. Source code is available at https://github.com/jjbrophy47/ibug.
Accept
This paper presents a method for extending any GBRT point predictor to produce probabilistic predictions such that the aleatoric uncertainty can be quantified. It computes a nonparametric distribution around a prediction using the k nearest neighbors, where the distance is measured by a kernel that is similar to the random forest kernel. The paper is well written and easy to read. All reviewers agree that it is a simple, practical method that is well engineered. However, all the techniques used in this system are existing ones, so its technical novelty is limited. During the discussion period, I had more than a few communications with reviewers. On one hand, there were some concerns about the limited novelty, which I also agree with; in fact, this concern became more notable in the discussion. On the other hand, a strength is in its simplicity, practicability, and its excellence in engineering and design: how to critically evaluate alternative approaches, and how to design experiments that evaluate those approaches. A few things that I would like the authors to consider in their future submissions include: (1) the method is applied to quantify only aleatoric uncertainty, which should be clearly mentioned earlier in the paper, since these days we observe a few interesting methods for quantifying the predictive uncertainty (that is, both aleatoric and epistemic uncertainty); (2) a kernel similar to the random forest kernel is used as a distance metric. Unlike RF, GBRTs construct trees with small depth, so it is expected that many instances fall in the same leaf. The behavior might therefore differ from that of RF. Despite the concern about limited novelty, most reviewers feel that this work can be accepted, so I recommend it for acceptance.
train
[ "L84zZlAOC4Q", "K8-rdNSRbpa", "Dkf6p7JXv2n", "nn4t0KOI3_Y", "cu0GoVA3FiZ", "KFCeV2jnaym", "1nmah7Yz5my8", "3M6NeFB0utd", "ZoikA6jlqyh", "uOc7hCh8Htg", "Wy1ll5z7zY-", "0XlCCi70BkI", "ZwOOsyQoTEN", "H5pQgwokEY" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarifications and additional experimental results. I increased my score.", " Thanks for the response.", " We thank the reviewer for their thoughtful feedback and appreciate their recognition of the engineering effort put into IBUG.\n\n**Q: Comparison to Davies and Ghahramani (2014)?**\n\n**A:** Yes, the affinity computation is indeed similar to Davies and Ghahramani (2014); our paths then diverge in which we focus on probabilisitic predictions in GBRTs and flexibly modeling the output. We thank the reviewer for bringing this work to our attention, and we will absolutely discuss and cite their work in the main text.\n\n*Davies, Alex, and Zoubin Ghahramani. \"The random forest kernel and other kernels for big data from random partitions.\" arXiv preprint arXiv:1402.4293 (2014).*\n\n**Q: Comparison to BART?**\n\n**A:** We compare IBUG to BART (using the implementation provided at https://github.com/JakeColtman/bartpy) on a subset of the datasets consisting of 13 smaller datasets (we exceeded our computational resources when attempting to run BART on the larger datasets). We tune the number of trees in BART using values [10, 50, 100, 200]; for a fair comparison, we tune the base model for IBUG using the same values. The results are shown below, and we observe that IBUG consistently outperforms BART in terms of point performance and CRPS. IBUG and BART performed similarly in terms of mean absolute calibration error (MACE); however, IBUG had more cases than BART in which the MACE *and* sharpness scores were both low.\n\n### RMSE\n|Dataset|BART|IBUG|\n| --- | --- | --- |\n|Bike| 8.159| **2.557**|\n|California| 0.536| **0.441**|\n|Communities| **0.134**| 0.134|\n|Concrete| 5.511| **3.917**|\n|Energy| 0.671| **0.322**|\n|Kin8nm| 0.187| **0.108**|\n|Life| 2.678| **1.710**|\n|Naval| 0.002| **0.001**|\n|Power| 4.071| **2.970**|\n|Protein| 4.743| **3.618**|\n|STAR| 234.977| **230.111**|\n|Wine| 0.711| **0.606**|\n|Yacht| 1.447| **0.895**|\n|*IBUG W-L*| 12-1| -|\n\n### CRPS\n|Dataset|BART|IBUG|\n| --- | --- | --- |\n|Bike| 4.365| **0.959**|\n|California| 0.279| **0.211**|\n|Communities| 0.070| **0.065**|\n|Concrete| 3.088| **2.075**|\n|Energy| 0.384| **0.170**|\n|Kin8nm| 0.107| **0.062**|\n|Life| 1.424| **0.792**|\n|Naval| 0.001| **0.000**|\n|Power| 2.238| **1.533**|\n|Protein| 2.712| **1.808**|\n|STAR| 134.22| **130.24**|\n|Wine| 0.396| **0.324**|\n|Yacht| 0.754| **0.308**|\n|*IBUG W-L*| 13-0| -|\n\n### MACE/Sharpness\n|Dataset|BART|IBUG|\n| --- | --- | --- |\n|Bike| 0.083/7.334| **0.048**/**2.206**|\n|California| **0.019**/**0.416**| 0.020/0.436|\n|Communities| 0.046/**0.107**| **0.028**/0.129|\n|Concrete| **0.075**/5.530| 0.118/**2.954**|\n|Energy| **0.088**/0.860| 0.163/**0.343**|\n|Kin8nm| 0.081/0.149| **0.055**/**0.143**|\n|Life| 0.054/2.641| **0.038**/**1.599**|\n|Naval| **0.044**/0.002| 0.147/**0.000**|\n|Power| 0.041/3.451| **0.020**/**3.037**|\n|Protein| 0.062/5.925| **0.012**/**3.813**|\n|STAR| 0.045/276.18| **0.022**/**235.51**|\n|Wine| **0.040**/**0.608**| 0.072/0.608|\n|Yacht| **0.118**/1.311| 0.143/**0.818**|\n|*IBUG W-L*| 7-6/10-3| -/-|\n**Performance claims.**\n\n**A:** In general, we describe IBUG as performing similarly or better than existing approaches while offering extra flexibility in terms of model agnosticism and posterior modeling. 
However, we will make sure our claims convey a notion of *competitive* performance in comparison with existing methods.\n\n**Q: Sensitivity of $k$?**\n\n**A:** We find choosing an appropriate value of $k$ is crucial to achieving good probabilistic performance. However, as our experiments demonstrate, appropriate tuning can lead to effective values of $k$ for a wide range of datasets.\n\n**Typos.**\n\n**A:** We thank the reviewer for spotting this typo.", " We are encouraged you find our approach interesting, and we aim to address your major concerns here.\n\n**Q: Comparison to KNN with most important features?**\n\n**A:** The affinity computation in IBUG is an example of a supervised kernel based on the learnt structure of the tree ensemble and we thus expect it to outperform similarity measures based on Euclidean distance. However, we have added an additional comparison to a KNN model that operates on a reduced set of the most important features. First, we apply standard scaling and train a GBRT model to obtain the most important features, then we filter the dataset down to the $\\upsilon$ most important features before applying KNN. We treat $\\upsilon$ as a hyperparameter and tune $\\upsilon$ using values [5, 10, 20]; we denote this method KNN-FI. To estimate the mean and variance, we use the GBRT prediction as the conditional mean, and use KNN-FI to identify the $k$-nearest neighbors (in the reduced feature space) to estimate the variance for each prediction.\n\nResults are shown below, and we observe KNN-FI is able to achieve better point and probabilistic performance than standard KNN. However, IBUG generally outperforms KNN-FI in terms of CRPS and MACE (mean absolute calibration error)/sharpness, demonstrating the effectiveness of a supervised tree-based kernel over a similarity measure like Euclidean distance. We note that KNN achieves good MACE scores but poor sharpness scores, meaning the variance of the KNN predictions are generally too wide. 
We thank the reviewer for their suggestion and will add this comparison into the paper.\n\n### RMSE\n|Dataset|KNN|KNN-FI|IBUG|\n| --- | --- | --- | --- |\n|Ames| 35311| **22229**| 22349|\n|Bike| 35.8| 2.903| **2.379**|\n|California| 0.624| 0.436| **0.436**|\n|Communities| 0.142| **0.132**| 0.132|\n|Concrete| 8.433| 3.776| **3.751**|\n|Energy| 3.663| **0.305**| 0.306|\n|Facebook| 30.61| 20.43| **20.38**|\n|Kin8nm| 0.117| **0.102**| 0.103|\n|Life| 2.048| 1.679| **1.678**|\n|MEPS| 24.78| 24.12| **23.85**|\n|MSD| 10.174| **8.778**| 8.780|\n|Naval| 0.002| **0.000**| 0.000|\n|News| **11027**| 11053| 11051|\n|Obesity| 4.276| 0.165| **0.160**|\n|Power| 3.735| 2.965| **2.950**|\n|Protein| 3.811| 3.423| **3.420**|\n|STAR| 240.34| 230.47| **230.31**|\n|Superconductor| 5.321| 0.374| **0.357**|\n|Synthetic| 10.891| **10.197**| 10.209|\n|Wave| 41223| 7615| **7493**|\n|Wine| 0.696| 0.603| **0.601**|\n|Yacht| 8.600| **0.890**| 0.899|\n|*KNN-FI W-L*| 21-1| -| 8-14|\n|*IBUG W-L*| 21-1| 14-8| -|\n\n### CRPS\n|Dataset|KNN|KNN-FI|IBUG|\n| --- | --- | --- | --- |\n|Ames| 16951| 10793| **10448**|\n|Bike| 17.330| 0.966| **0.829**|\n|California| 0.310| 0.221| **0.210**|\n|Communities| 0.070| 0.065| **0.064**|\n|Concrete| 4.491| 1.960| **1.948**|\n|Energy| 1.911| 0.161| **0.155**|\n|Facebook| 5.064| 3.289| **3.059**|\n|Kin8nm| 0.067| **0.060**| 0.060|\n|Life| 0.902| 0.799| **0.782**|\n|MEPS| **5.900**| 6.362| 6.342|\n|MSD| 5.317| 4.562| **4.369**|\n|Naval| 0.001| 0.000| **0.000**|\n|News| 2473| 2620| **2431**|\n|Obesity| 2.256| 0.063| **0.058**|\n|Power| 1.958| 1.571| **1.528**|\n|Protein| 1.724| 1.699| **1.692**|\n|STAR| 136.49| 130.85| **130.29**|\n|Superconductor| 1.929| 0.110| **0.087**|\n|Synthetic| 6.147| **5.767**| 5.771|\n|Wave| 19320| 4312| **4264**|\n|Wine| 0.382| 0.324| **0.320**|\n|Yacht| 3.485| 0.304| **0.300**|\n|*KNN-FI W-L*| 20-2| -| 2-20|\n|*IBUG W-L*| 21-1| 20-2| -|\n\n### MACE/Sharpness\n|Dataset|KNN|KNN-FI|IBUG|\n| --- | --- | --- | --- |\n|Ames| **0.030**/34350| 0.090/**22243**| 0.049/24818|\n|Bike| **0.018**/42.894| 0.086/**1.198**| 0.069/1.924|\n|California| **0.016**/0.603| 0.089/**0.402**| 0.037/0.459|\n|Communities| 0.040/0.144| 0.051/0.142| **0.027**/**0.129**|\n|Concrete| **0.031**/9.093| 0.061/3.617| 0.081/**3.062**|\n|Energy| **0.050**/4.071| 0.189/0.382| 0.132/**0.254**|\n|Facebook| 0.115/**15.854**| 0.116/18.123| **0.080**/17.056|\n|Kin8nm| **0.056**/0.154| 0.104/**0.098**| 0.087/0.116|\n|Life| **0.032**/1.796| 0.073/**1.528**| 0.063/1.596|\n|MEPS| 0.065/**11.477**| 0.068/15.933| **0.064**/15.907|\n|MSD| 0.028/10.151| 0.083/9.940| **0.009**/**7.898**|\n|Naval| 0.126/0.002| 0.058/0.001| **0.043**/**0.000**|\n|News| 0.196/5180| 0.206/5167| **0.124**/**3431**|\n|Obesity| **0.022**/4.962| 0.115/0.179| 0.109/**0.111**|\n|Power| **0.015**/3.815| 0.041/3.429| 0.025/**3.148**|\n|Protein| **0.030**/**3.339**| 0.042/3.839| 0.038/3.751|\n|STAR| 0.027/247.67| 0.025/244.83| **0.021**/**235.55**|\n|Superconductor| **0.049**/5.286| 0.140/**0.129**| 0.079/0.250|\n|Synthetic| **0.010**/11.066| 0.013/10.497| 0.010/**10.301**|\n|Wave| **0.011**/42887| 0.191/6463| 0.206/**6372**|\n|Wine| **0.030**/0.702| 0.097/0.649| 0.093/**0.608**|\n|Yacht| 0.127/7.332| 0.123/**0.642**| **0.107**/0.795|\n|*KNN-FI W-L*| 3-19/19-3| -/-| 2-20/7-15|\n|*IBUG W-L*| 8-14/19-3| 20-2/15-7| -/-|", " **Variance calibration suggests biased uncertainty estimates.**\n\n**A:** We agree that applying variance calibration can help correct for variance estimates that are systematically too large or small. 
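\n\nFor concreteness, a minimal sketch of the KNN-FI baseline described above is given below. This is an illustrative sketch only: the scikit-learn estimators and the function/parameter names (e.g., knn_fi_predict, n_features) are placeholder assumptions, not our actual experimental pipeline, which uses the tuned base models and hyperparameter grids reported above.\n\n```python\nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.neighbors import NearestNeighbors\n\ndef knn_fi_predict(X_train, y_train, X_test, n_features=10, k=20):\n    # GBRT point prediction; variance from the k nearest neighbors in the\n    # space of the n_features most important (standard-scaled) features.\n    y_train = np.asarray(y_train)\n\n    # Standard-scale the features.\n    scaler = StandardScaler().fit(X_train)\n    X_tr, X_te = scaler.transform(X_train), scaler.transform(X_test)\n\n    # Train a GBRT point predictor and rank features by importance.\n    gbrt = GradientBoostingRegressor().fit(X_tr, y_train)\n    top = np.argsort(gbrt.feature_importances_)[::-1][:n_features]\n\n    # k-nearest neighbors in the reduced feature space.\n    nn = NearestNeighbors(n_neighbors=k).fit(X_tr[:, top])\n    _, idx = nn.kneighbors(X_te[:, top])\n\n    # Conditional mean from the GBRT; variance from the neighbors' targets.\n    mu = gbrt.predict(X_te)\n    var = y_train[idx].var(axis=1)\n    return mu, var\n```\n\nIn the results above, the number of retained features $\upsilon$ is tuned over [5, 10, 20] as described earlier.\n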
However, variance calibration is not just beneficial for IBUG, but virtually all methods we compare to, with variance calibration helping some methods substantially more than IBUG (e.g., PGBM, also see response to *gTo6*); thus, we view variance calibration as a simple post-processing step that can significantly improve probabilisitic performance for any uncertainty estimator.\n\n**Significance testing.**\n\n**A:** Our protocol of 20 different 90/10 train-test random folds is based on previous work [Duan et al. 2020, Sprangers et al. 2021]. However, we agree with the reviewer that this form of significance testing is somewhat biased and may result in overly optimistic results. The results shown here show wins/losses based on the mean score over the 20 random folds.\n\n**NLL and CRPS are hard to interpret.**\n\n**A:** Negative-log likelihood is a very common metric to use when evaluating probabilisitic predictions in the form of density functions, and CRPS is a popular proper scoring rule that generalizes mean absolute error to probabilistic predictions [Gneiting and Raftery 2007]. However, we also evaluate each method using the *check score* (a.k.a. pinball loss) and interval score (evaluation using a pair of quantiles with expected coverage) using the *Uncertainty Toolbox* [Chung et al. 2021] below. Under these additional metrics, IBUG still performs similarly or better than existing approaches.\n\n### Check Score (lower is better)\n|Dataset|KNN|KNN-FI|NGBoost|PGBM|IBUG|\n| --- | --- | --- | --- | --- | --- |\n|Ames| 8557| 5444| 19002| 5370| **5274**|\n|Bike| 8.749| 0.486| 6.301| 0.583| **0.418**|\n|California| 0.156| 0.111| 0.129| 0.110| **0.106**|\n|Communities| 0.035| 0.033| 0.033| 0.034| **0.032**|\n|Concrete| 2.268| 0.988| 1.649| **0.956**| 0.982|\n|Energy| 0.965| 0.081| 0.267| 0.082| **0.078**|\n|Facebook| 2.548| 1.658| 2.080| 1.824| **1.543**|\n|Kin8nm| 0.034| **0.030**| 0.048| 0.036| 0.030|\n|Life| 0.455| 0.403| 0.711| 0.413| **0.395**|\n|MEPS| 2.969| 3.207| **2.791**| 3.145| 3.197|\n|MSD| 2.684| 2.303| 2.282| 2.309| **2.206**|\n|Naval| 0.000| 0.000| 0.002| 0.000| **0.000**|\n|News| 1243| 1318| **1081**| 1171| 1223|\n|Obesity| 1.139| 0.032| 2e17| 0.057| **0.029**|\n|Power| 0.988| 0.793| 1.066| 0.774| **0.771**|\n|Protein| 0.869| 0.857| 1.349| 0.935| **0.854**|\n|STAR| 68.924| 66.076| 66.401| 66.149| **65.795**|\n|Superconductor| 0.973| 0.055| 1.227| 0.060| **0.044**|\n|Synthetic| 3.104| **2.912**| 2.948| 2.916| 2.914|\n|Wave| 9754| 2172| 288225| **1974**| 2148|\n|Wine| 0.193| 0.164| 0.195| 0.163| **0.162**|\n|Yacht| 1.758| 0.153| 0.488| **0.121**| 0.152|\n|*IBUG W-L*| 21-1| 20-2| 20-2| 17-5| -|\n\n### Interval Score (lower is better)\n|Dataset|KNN|KNN-FI|NGBoost|PGBM|IBUG|\n| --- | --- | --- | --- | --- | --- |\n|Ames| 88595| 63262| 194678| 58820| **56298**|\n|Bike| 89.940| 6.840| 65.692| 6.811| **4.960**|\n|California| 1.645| 1.312| 1.357| 1.238| **1.135**|\n|Communities| 0.368| 0.347| 0.351| 0.363| **0.342**|\n|Concrete| 21.932| 11.994| 16.636| **10.648**| 12.152|\n|Energy| 9.419| 1.159| 2.677| **0.957**| 1.048|\n|Facebook| 36.181| 20.641| 29.403| 27.669| **17.447**|\n|Kin8nm| 0.343| 0.327| 0.457| 0.377| **0.320**|\n|Life| 5.213| 4.976| 7.465| 5.138| **4.752**|\n|MEPS| 41.763| **39.778**| 42.406| 41.090| 39.868|\n|MSD| 27.345| 23.977| 24.483| 24.784| **22.338**|\n|Naval| 0.005| **0.001**| 0.014| 0.003| 0.001|\n|News| 18901| 19680| 15985| **15886**| 16906|\n|Obesity| 11.690| 0.398| 2e18| 0.725| **0.351**|\n|Power| 10.214| 8.697| 10.629| **8.105**| 8.251|\n|Protein| 9.619| 
**9.115**| 13.398| 10.226| 9.136|\n|STAR| 662| 638| 642| 640| **632**|\n|Superconductor| 10.845| 0.861| 12.103| 0.770| **0.515**|\n|Synthetic| 30.103| 28.226| 28.717| 28.259| **28.211**|\n|Wave| 97237| 28368| 2714251| **20332**| 28290|\n|Wine| 1.929| 1.736| 1.938| 1.739| **1.724**|\n|Yacht| 20.211| 1.885| 5.040| **1.312**| 1.664|\n|*IBUG W-L*| 22-0| 18-4| 21-1| 16-6| -|\n\n\n*Gneiting, Tilmann, and Adrian E. Raftery. \"Strictly proper scoring rules, prediction, and estimation.\" Journal of the American Statistical Association 102.477 (2007): 359-378.*\n\n*Chung, Youngseog, et al. \"Uncertainty toolbox: an open-source library for assessing, visualizing, and improving uncertainty quantification.\" arXiv preprint arXiv:2109.10254 (2021). URL: https://uncertainty-toolbox.github.io/.*
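\n\nTo make the probabilistic metrics above concrete: for a Gaussian predictive distribution $N(\mu, \sigma^2)$, the CRPS has the closed form $\sigma [ z(2\Phi(z) - 1) + 2\phi(z) - 1/\sqrt{\pi} ]$ with $z = (y - \mu)/\sigma$ (Gneiting and Raftery 2007), and the check score at level $\tau$ is the pinball loss. The sketch below is for illustration only; the reported check and interval scores are computed with the Uncertainty Toolbox referenced above, which (to our understanding) averages the pinball loss over a grid of quantile levels.\n\n```python\nimport numpy as np\nfrom scipy.stats import norm\n\ndef crps_gaussian(y, mu, sigma):\n    # Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2).\n    z = (y - mu) / sigma\n    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)\n                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))\n\ndef check_score(y, q_pred, tau):\n    # Check (pinball) loss for a predicted tau-quantile q_pred.\n    diff = y - q_pred\n    return np.maximum(tau * diff, (tau - 1.0) * diff)\n```\n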
We thank the reviewer for their suggestion and we will add these results to the paper.\n\n### RMSE\n|Dataset|NGBoost|PGBM|CBU|IBUG-CB|CBU+IBUG-CB|\n| --- | --- | --- | --- | --- | --- |\n|Ames| 71167| 22762| 21315| 21430| **21078**|\n|Bike| 47.165| 3.741| 3.043| 3.087| **2.949**|\n|California| 0.529| 0.440| 0.444| **0.424**| 0.427|\n|Communities| 0.139| 0.134| 0.131| 0.130| **0.130**|\n|Concrete| 6.121| 3.725| 3.511| 3.423| **3.420**|\n|Energy| 1.212| 0.329| 0.391| **0.279**| 0.320|\n|Facebook| 31.12| 20.32| 20.49| 20.38| **20.30**|\n|Kin8nm| 0.177| 0.108| 0.104| **0.086**| 0.091|\n|Life| 2.767| 1.703| 1.668| 1.655| **1.629**|\n|MEPS| 25.37| 23.87| 23.81| 23.81| **23.75**|\n|MSD| 9.290| 8.806| 8.742| 8.749| **8.725**|\n|Naval| 0.006| 0.000| 0.001| **0.000**| 0.000|\n|News| 11050| 11047| 10996| 11001| **10994**|\n|Obesity| 0.570| 0.181| 0.178| 0.167| **0.164**|\n|Power| 3.923| 2.979| 2.901| 2.904| **2.880**|\n|Protein| 4.963| **3.480**| 3.536| 3.532| 3.510|\n|STAR| 232.3| 230.6| 228.3| 228.0| **227.8**|\n|Superconductor| 7.344| **0.412**| 0.493| 0.444| 0.442|\n|Synthetic| 10.25| 10.21| 10.19| 10.19| **10.18**|\n|Wave| 1000905| 7901| 4623| 4635| **3835**|\n|Wine| 0.700| 0.607| 0.629| **0.602**| 0.603|\n|Yacht| 3.417| 0.695| 0.537| **0.465**| 0.471|\n|*IBUG-CB W-L*| 22-0| 19-3| 15-7| -| 6-16|\n|*CBU+IBUG-CB W-L*| 22-0| 20-2| 22-0| 16-6| -|\n\n### CRPS\n|Dataset|NGBoost|PGBM|CBU|IBUG-CB|CBU+IBUG-CB|\n| --- | --- | --- | --- | --- | --- |\n|Ames| 37637| 10641| 10559| 9953| **9742**|\n|Bike| 12.481| 1.155| 0.833| 0.972| **0.776**|\n|California| 0.256| 0.219| 0.214| 0.210| **0.204**|\n|Communities| 0.066| 0.068| 0.065| 0.063| **0.063**|\n|Concrete| 3.266| 1.895| 1.744| 1.736| **1.659**|\n|Energy| 0.528| 0.163| 0.198| **0.142**| 0.158|\n|Facebook| 4.132| 3.626| 3.301| 3.192| **3.109**|\n|Kin8nm| 0.095| 0.071| 0.058| 0.052| **0.051**|\n|Life| 1.408| 0.819| 0.789| 0.801| **0.741**|\n|MEPS| **5.550**| 6.243| 6.140| 6.158| 6.042|\n|MSD| 4.523| 4.575| 4.360| 4.408| **4.345**|\n|Naval| 0.003| 0.000| 0.000| 0.000| **0.000**|\n|News| **2149**| 2328| 2293| 2510| 2324|\n|Obesity| 3e17| 0.113| 0.064| 0.058| **0.054**|\n|Power| 2.112| 1.534| 1.480| 1.535| **1.470**|\n|Protein| 2.672| 1.853| 1.798| 1.795| **1.753**|\n|STAR| 131.50| 131.00| 129.56| 129.47| **129.19**|\n|Superconductor| 2.429| **0.119**| 0.154| 0.151| 0.128|\n|Synthetic| 5.838| 5.774| 5.761| 5.762| **5.759**|\n|Wave| 570788| 3911| 2201| 2490| **1896**|\n|Wine| 0.385| 0.324| 0.338| 0.324| **0.323**|\n|Yacht| 0.967| 0.240| 0.229| 0.191| **0.187**|\n|*IBUG-CB W-L*| 20-2| 18-4| 14-8| -| 1-21|\n|*CBU+IBUG-CB W-L*| 20-2| 21-1| 21-1| 21-1| -|\n\n(results continued in part 2...)", " ### MACE/Sharpness\n|Dataset|NGBoost|PGBM|CBU|IBUG-CB|CBU+IBUG-CB|\n| --- | --- | --- | --- | --- | --- |\n|Ames| 0.076/78290| **0.058**/**17692**| 0.088/20385| 0.090/20709| 0.077/19136|\n|Bike| 0.070/110.832| 0.098/1.884| **0.037**/2.159| 0.108/**1.224**| 0.056/1.581|\n|California| **0.012**/0.481| 0.053/**0.344**| 0.020/0.357| 0.095/0.371| 0.043/0.354|\n|Communities| **0.031**/0.125| 0.080/**0.123**| 0.044/0.125| 0.041/0.134| 0.049/0.128|\n|Concrete| **0.047**/6.308| 0.056/3.046| 0.101/3.288| 0.102/**2.494**| 0.061/2.765|\n|Energy| 0.158/1.152| 0.109/0.292| 0.057/0.365| 0.077/**0.270**| **0.054**/0.301|\n|Facebook| 0.095/9.186| 0.195/**3.830**| 0.100/10.515| **0.063**/18.165| 0.096/13.672|\n|Kin8nm| **0.021**/0.177| 0.049/0.162| 0.024/0.096| 0.135/**0.070**| 0.053/0.080|\n|Life| **0.041**/3.210| 0.064/**1.044**| 0.080/1.259| 0.142/1.276| 0.063/1.182|\n|MEPS| 
**0.031**/**8.693**| 0.073/10.107| 0.097/11.891| 0.086/17.381| 0.092/14.305|\n|MSD| **0.007**/7.743| 0.037/**7.435**| 0.011/8.097| 0.039/9.086| 0.030/8.491|\n|Naval| **0.033**/0.006| 0.237/0.001| 0.050/0.001| 0.061/**0.000**| 0.080/0.001|\n|News| 0.093/**2444**| 0.088/3284| **0.078**/3395| 0.207/4556| 0.085/3577|\n|Obesity| 0.063/1e20| 0.175/0.462| **0.021**/0.139| 0.102/**0.088**| 0.036/0.107|\n|Power| **0.017**/3.816| 0.033/2.643| 0.020/**2.367**| 0.031/3.320| 0.021/2.761|\n|Protein| 0.029/5.118| 0.088/**3.147**| 0.036/3.148| **0.015**/3.980| 0.044/3.500|\n|STAR| 0.029/247.19| 0.040/248.32| 0.024/**233.28**| 0.026/242.84| **0.023**/237.63|\n|Superconductor| 0.089/8.070| 0.082/**0.183**| 0.026/0.316| 0.166/0.242| **0.025**/0.253|\n|Synthetic| 0.010/14.39| 0.019/10.74| 0.013/**10.28**| **0.009**/10.41| 0.010/10.34|\n|Wave| 0.129/1e6| 0.019/6442| **0.008**/**4149**| 0.072/6581| 0.055/5096|\n|Wine| **0.017**/0.685| 0.083/**0.544**| 0.026/0.579| 0.090/0.643| 0.061/0.602|\n|Yacht| 0.118/4.085| 0.126/0.493| **0.083**/0.450| 0.116/0.479| 0.084/**0.439**|\n|*IBUG-CB W-L*| 7-15/17-5| 10-12/9-13| 5-17/7-15| -/-| 6-16/7-15|\n|*CBU+IBUG-CB W-L*| 8-14/17-5| 17-5/9-13| 9-13/11-11| 16-6/15-7| -/-|\n\n**Q: How are the hyperparameters tuned?**\n\n**A:** You are correct that for NLL in Table 1, the hyperparameters are optimized for NLL (shown in Table 5); for CRPS in Table 1, the selected hyperparameters are shown in Table 6. IBUG is a post-hoc approach applied to a given GBRT point predictor; thus, there are two sets of hyperparameters to tune, those for the point predictor (e.g., LightGBM, XGBoost, etc.), and those for IBUG ($k$, $\\rho$, and $\\gamma$/$\\delta$). The hyperparameters for the point predictor are tuned based on a point-prediction metric such as RMSE, then the IBUG hyperparameters are tuned based on a probabilistic performance metric such as NLL or CRPS. Therefore, the LightGBM base model has the same selected hyperparameters in Tables 5 and 6 since the base model is tuned using RMSE, but IBUG has different selected hyperparameters since it is tuned using NLL and CRPS, respectively.", " **Q: What is the effect of different tree-sampling strategies?**\n\n**A:** We agree the sampling strategy can make a significant difference on the efficiency and efficacy of the probabilisitic predictions in IBUG. Sec. C.3 shows results for three different sampling strategies: *random*, *first-to-last*, and *last-to-first* (we will replace *ascending* with *first-to-last* and *descending* with *last-to-first*).\n\nWe observe that sampling trees *last-to-first* generally requires sampling all trees in order to achieve the lowest NLL on the test set. In contrast, when sampling *first-to-last*, our results provide some evidence to the reviewer's comment that initial trees provide the most significant contributions; this is especially evident for the Naval, Protein, and Wine datasets in which sampling less than 5% of the trees results in the same or better NLL than when sampling all trees. However, on the Bike and Obesity datasets, *random* sampling achieves the lowest NLL with the smallest number of trees sampled (note the sharp \"elbow\" for these datasets in Figure 7a) out of all of the sampling strategies; thus, sampling a mixture of trees earlier and later in training is sometimes most beneficial. Thank you for your thoughtful comments, we will add this discussion to the main text and reference Figure 7 and Sec. C.3.\n\n**Q: Comparison to KNN applied to original features?**\n\n**A:** Yes, Sec. 
B.3 provides a comparison between IBUG and KNN applied to the original features. Overall, KNN was not competitive with IBUG in both point and probabilistic performance (Tables 7, 8, and 9). Due to space constraints, we omitted this comparison in the main text.\n\nUnfortunately, the results shown in Sec. B.3 does not use feature normalization before applying KNN. However, we have run additional experiments that *do* apply standard scaling before using KNN; the results are shown in the response to reviewer *VfPp*. Overall, we observe the same trends as in Sec B.3. Thank you for your suggestion, we will make sure to add a discussion of these results in the main text.\n\n**Typos.**\n\n**A:** Thank you for spotting these typos.", " Thank you for your comments and suggestions, here we address major concerns.\n\n**Q: What's the point prediction of IBUG? Is it the original GBRT prediction, or is it obtained from the nearest-neighbors set? If it is, then is the prediction as good as the original GBRT prediction?**\n\n**A:** Yes, you are correct that the point prediction of IBUG is the original GBRT prediction (Sec. 3.1). In preliminary experiments, we tested using the *k*-nearest neighbors to model the conditional mean, but we found using the original GBRT prediction achieves better point predictions and subsequently better probabilistic predictions.\n\n**Q: No comparisons beyond NGBoost and PGBM? Can you use a neural network to estimate the variance?**\n\n**A:** NGBoost and PGBM are recent methods that achieve state-of-the-art results on tabular probabilistic regression problems; our approach is conceptually simple, easy to implement, and is generally competitive with these approaches. Using a neural network to estimate the variance adds a great deal of complexity and may require careful tuning of the architecture and hyperparameters, and may not work well for problems with limited data.\n\nAdditional KNN and random baseline results are in Sec. B.3 of the Appendix. However, we also add comparisons to *three* more methods based on other reviewers' feedback. Specifically, we compare against: (1) a KNN method that uses feature importance from the GBRT to reduce the dimensionality and avoid the curse of dimensionality (see response to *VfPp*); (2) BART, a popular but expensive sampling based approach (see response to *TLnP*); and (3) CBU, CatBoost model using the loss function \"RMSEWithUncertainty\" (see response to *gTo6*).\n\n**Q: In probabilistic forecasting, calibration error (with sharpness) is really a popular metric. Can you consider using it in experiments? ECE and sharpness can provide more comprehensive evaluations.**\n\n**A:** Thank you for the suggestion, we have added a combination of MACE (mean absolute calibration error, equivalent to ECE)/sharpness to our probabilistic performance evaluation. We show results for these metrics in our responses to the other reviewers, and provide new analyses and insights into existing state-of-the-art methods.\n\n**Tuning hyperparameters $\\gamma$, $\\delta$, and $k$**.\n\n**A:** Using a validation set is a valid and very common method for tuning hyperparameters. Yes, the optimal value of $k$ does depend on the dataset, so we follow the standard KNN practice of choosing it based on validation data. Variance calibration (tuning $\\gamma$ and $\\delta$) is done for all methods as a final step, after all other hyperparameters are tuned. 
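To illustrate what tuning $\gamma$ and $\delta$ amounts to, a minimal sketch: an additive ($\sigma^2 + \gamma$) and a multiplicative ($\delta \sigma^2$) adjustment of the predicted variances, selected by validation NLL. The exact calibration families, grids, and selection criterion in Sec. 3.2 may differ; everything below is illustrative.

```python
# Hypothetical sketch of variance calibration as a final post-processing step:
# pick an additive (gamma) or multiplicative (delta) variance adjustment by
# minimizing NLL on held-out validation data.
import numpy as np
from scipy.stats import norm

def nll(y, mu, var):
    """Mean negative log-likelihood under per-instance Gaussians."""
    return -norm.logpdf(y, loc=mu, scale=np.sqrt(var)).mean()

def calibrate_variance(y_val, mu_val, var_val, grid=np.logspace(-3, 2, 30)):
    """Return the best ('gamma', value) or ('delta', value) adjustment."""
    candidates = [(nll(y_val, mu_val, var_val + g), ("gamma", g)) for g in grid]
    candidates += [(nll(y_val, mu_val, d * var_val), ("delta", d)) for d in grid]
    return min(candidates)[1]

# Example (synthetic): the model under-estimates the predictive variance.
rng = np.random.default_rng(0)
mu = rng.normal(size=500)
y = mu + rng.normal(size=500)
print(calibrate_variance(y, mu, np.full(500, 0.1)))  # e.g. ("gamma", ~0.9) or ("delta", ~10)
```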
This calibration substantially improves several baseline methods as well as IBUG.\n\n**Q: The minimum sample size in the leaf nodes is also a hyper-parameter in your model. What are its effects on the affinity calculation and the uncertainty estimation?**\n\n**A:** Minimum sample size in leaf nodes is a hyper-parameter of the GBDT learning algorithms. We tune this hyper-parameter to minimize RMSE, and then apply IBUG, which works with the tree ensemble as-is. In our experiments, we considered the hyper-parameter values [1, 20] (Sec. B.2); \"1\" was selected by 12 datasets and \"20\" by 8 datasets. In terms of efficiency, the affinity computation is faster for leaf nodes with a small number of examples, and is only affected by very large leaves (we observe LightGBM tends to produce large leaves in Sec. B.5). In terms of probabilistic performance, we observe no correlation between minimum leaf size and relative IBUG performance; for datasets where the minimum leaf size is \"1\", IBUG has better mean NLL (CRPS) scores on 9/12 (11/12) and 8/12 (10/12) datasets when compared against NGBoost and PGBM, respectively; when the minimum leaf size is \"20\", IBUG achieves better mean NLL (CRPS) on 10/10 (9/10) and 8/10 (7/10) (Table 1).\n\n**Mathematical expressions.**\n\n**A:** Thank you for spotting the typo in Equation (1). As for Equations (3) and (4), we will revise those sections for better clarity.", " We thank all the reviewers for their valuable comments and suggestions. We are encouraged reviewers find this an interesting and important problem (*E8u6*, *VfPp*) in which our approach is clear, simple, and sound (*gTo6*, *VfPp*, *TLnP*), yet flexible and effective (*gTo6*, *TLnP*) with an extensive empirical evaluation (*VfPp*) and a high degree of engineering (*TLnP*).\n\nDuring this short response period, we have focused first on the additional experiments requested by the reviewers, including additional baselines and metrics. Thank you for the suggestions -- we believe these additional comparisons will make the paper stronger. We will integrate them into a revised version of the paper ASAP. Additional baselines include:\n1. *KNN-FI*, a KNN method that uses feature importance from the GBRT to reduce the dimensionality and avoid the curse of dimensionality (see response to reviewer *VfPp*).\n2. *BART*, a popular but expensive sampling-based approach (see response to reviewer *TLnP*).\n2. *CBU*, a CatBoost model using the loss function “RMSEWithUncertainty” (see response to reviewer *gTo6*).\n\nAdditional metrics include MACE (mean absolute calibration error, equivalent to ECE) and sharpness (lower is better for both). Sharpness quantifies the average of the standard deviations and thus does not depend on the actual ground-truth label; therefore, MACE and sharpness are shown together, with better methods having both low calibration error and low sharpness scores. For all metrics, scores are averaged over the 20 random folds for each dataset. Additional reviewer-specific concerns are shown in response to each reviewer.", " This paper addresses an interesting problem: uncertainty estimation for GBRT. The authors propose a $k$-nearest neighbors approach based on an affinity between the testing sample and training samples. To save computational time, sampling from trees is used. They prove by experiments that the proposed IBUG works better than NGBoost and PGBM, two recent gradient boosting algorithms for tree model uncertainty. 
Strengths:\nThe idea has some similarities to existing works, such as distance-based conformal prediction. The way to calculate the distance (or the affinity) is new. So this paper shows a new idea, and the authors show the usefulness of the model. Comparisons to PGBM verify the benefits of such a GBRT uncertainty model.\n\nWeaknesses:\nI have some concerns about the technical issues. Please see the list in the question section. 1. What's the point prediction of IBUG? Is it the original GBRT prediction, or the $\\mu_y$ obtained from the $k$-nearest neighbors set? If it is $\\mu_y$, then is the prediction as good as the original GBRT prediction?\n\n2. Authors did not compare the proposed method to any other methods beyond NGBoost and PGBM. Although there are very few works on uncertainty estimation for GBRT, one can come up with some naive ideas. For example, how about fixing the point predictions of GBRT as the mean of Gaussian distributions, and using a deep neural network to predict the variances of the Gaussian with maximum likelihood? What's the comparison of this naive solution to yours? I'm not familiar with PGBM, however, comparing only to PGBM (and NGBoost) is somewhat problematic.\n\n3. In probabilistic forecasting, calibration error (with sharpness) is really a popular metric. Can you consider using it in experiments? ECE and sharpness can provide more comprehensive evaluations.\n\n4. In calibrating variance, tuning $\\gamma$ and $\\delta$ on validation data is not a good solution. Can they be learned with a gradient descent?\n\n5. The model performance versus varying $k$ is worthy to be studied. What is the suggested $k$ for the neighbors' set? Can it be fixed for all datasets? It seems $k$ varies greatly across datasets, as shown in the Appendix. This makes me confused, and there may be no guidelines on choosing $k$.\n\n6. The minimum sample size in the leaf nodes is also a hyper-parameter in your model. What are its effects on the affinity calculation and the uncertainty estimation?\n\n7. The writing can be improved, especially the mathematical expressions.\nIn Equation (1), the sum starts from $t=1$, not $i=1$.\nEquation (3) and Equation (4) need to be modified. No potential negative societal impact.", " This paper proposes a new method to estimate data uncertainty in GBDT models. The proposed approach uses k nearest training elements to produce probabilistic predictions. Nearest elements are determined using the constructed ensemble: top instances are chosen based on the number of times they are in the same leaf with the test example. The method can be applied to any GBDT model after it is trained. Strengths:\n- The method can work with any GBDT model and can be applied to any trained model at inference time.\n- The reported results show that the proposed approach outperforms existing methods (NGBoost and PGBM) in terms of both RMSE and probabilistic evaluation measures.\n\nWeaknesses\n- The inference time can increase significantly.\n- The paper does not compare with the existing probabilistic prediction for GBDT implemented in CatBoost (see details below in Questions). 1. Regarding the baselines, note that in CatBoost, there is a loss function called RMSEWithUncertainty, which, similarly to NGBoost, predicts mean and variance. I think that adding a comparison with this implementation is important. 
In this case, by comparing CatBoost+RMSEWithUncertainty with CatBoost+IBUG, one can see the effect of the proposed method without possible effects caused by differences between GBDT implementations.\n\n2. Some training details are not clear to me. Namely, how exactly were the parameters tuned? Is it true that in Table 1 for NLL the parameters are tuned via NLL, for CRPS via CRPS, etc.? This seems to agree with Tables 5-6 in the supplementary, where we see different parameters for different measures. If this is true, does IBUG for RMSE coincide with the standard LightGBM algorithm?\n\n3. There are some important questions that are not addressed in the main text, but the corresponding experiments are in the supplementary materials. For instance:\n - To speed up the inference, it is proposed to sample trees uniformly at random. However, for GBDT models, it is known that trees at the beginning of the ensemble and at the end are very different. Namely, the first trees give the most significant contribution, while the tree structures at the end are closer to random. Thus, a particular sampling strategy is important. This problem seems to be addressed in Appendix C.3 (but I am not sure what “in ascending order” means here). I expect a more detailed discussion about this in the main text.\n - Another important question is whether the proposed approach of computing the similarity is better than a naive KNN applied to the original features? This question is not discussed in the main text but seems to be addressed in Appendix B.3. I expect to see more details on this in the main text. Also, it is not discussed whether feature normalization is performed before applying KNN.\n\nMinor:\n- In line 605 of the supplementary materials, it is written that one-hot encoding is used for categorical variables. This can be suboptimal for some GBDT libraries, e.g., CatBoost has its own way of dealing with categorical variables.\n\nThe paper is, in general, well written, and I noticed only minor typos in the text:\n- Line 87: “mean and variance is” (should be “are”), the same for lines 141 and 189\n- Caption of Figure 2: “means” -> “mean”\n- Line 252: “Table” -> “Figure\"\n Yes.", " Starting from gradient boosting trees, the paper develops a new method to estimate the conditional distribution P(y|x) for regression problems. The authors propose to estimate P(y|x) in a nearest neighbor approach, using a specific similarity measure. Two instances are considered similar if they end up in the same leaf in many of the trees. Using the neighborhood, the variance or the full conditional distribution P(y|x) can be estimated. The authors also present a calibration procedure for situations where the variance is wrongly estimated during training. \n\nIn the experiments, the new method is compared to two specific baselines on 20 tabular datasets, using NLL and CRPS as performance measures. Strengths:\n- The paper is well written and easy to follow.\n- Uncertainty estimation for gradient-boosted regression trees is an important research problem, because boosted trees usually yield SOTA performance on tabular datasets. \n- The experimental evaluation is quite extensive. \n\nWeaknesses:\n- The proposed method comes without any theoretical justification. The method is in essence a heuristic. \n- I would have liked to see a comparison with nearest neighbor methods that use other similarity measures. Below I am listing potential limitations. I welcome the authors to comment on my remarks. 
The presented approach is interesting, but I also see some limitations. \n\nThe presented approach is a very simple approach with limited novelty. It is in essence a nearest neighbor method with a specific similarity measure. Estimating the conditional distribution by analyzing the neighborhood of a test instance is a well-known approach in nearest neighbor research, so the only novelty here is the similarity measure, which is quite specific. I would not be surprised if this similarity measure outperforms Euclidean distance, because the presented similarity is computed on those features that matter for prediction. Especially for high-dimensional datasets with many irrelevant features this might be an advantage over Euclidean distance. However, in nearest neighbor research, many alternative similarity scores that also provide a solution for the curse of dimensionality have been proposed. I find it a pity that this literature is completely ignored. \n\nTo my opinion, the proposed similarity measure has at least one obvious shortcoming: it is a non-continuous function that results in many ties. I am wondering whether this performance measure is able to outperform some simple baselines that also overcome the curse of dimensionality, see e.g. select the most important features based on a variable importance criterion, and compute the Euclidean distance in the resulting lower-dimensional space. Overall, I would have liked to see more theoretical and experimental justification that the proposed similarity measure is the way to go. \n\nThe fact that the variance needs to be recalibrated using a validation set lets me conclude that the considered similarity measure leads to a biased estimate of P(y|x). More theoretical insights on what goes wrong would be useful. The experiments show that the new method outperforms some baselines, but that's what 99% of the Neurips submissions claim. These claims are hard to verify, thus some theoretical results would help me to believe that the proposed method is state-of-the-art. \n\nIn the experiments, the assumptions for using a paired t-test are not met. Individual numbers are not independent, because the training datasets overlap. If you want to compute p-values, please use the right type of test. See for example Dietterich \"Approximate statistical tests for comparing supervised learning algorithms\" and follow-up papers on that topic for using the right tests. \n\nNLL and CRPS are hard to interpret as measures for comparing different approaches. To my opinion, checking the validity of a predefined prediction interval is easier to interpret. This measure is commonly used in the conformal prediction for regression literature. ", " The authors develop IBUG, a straightforward approach for producing probabilistic predictions for gradient-boosted regression trees. The approach itself is very simple, but the point the authors are making is that this approach is useful: the authors do a very large amount of computational work showing the potential of their approach. The code the authors provide looks clean and easy-to-use.\n\nThe contributions of this paper are largely in engineering a good solution and then implementing that solution in an excellent way: there is no big methodological contribution. ORIGINALITY\n\n\"Originality\" is the weakest aspect of the paper. The authors don't cite Davies & Ghahramani (2014), but the \"Affinity Score\" in Equation (1) is equivalent to Section 3 in Davies & Ghahramani (2014). 
Even putting aside Davies & Ghahramani (2014), this paper is not terribly \"new\" or \"novel\": nothing in this paper is very surprising.\n\nHowever, \"originality\" is often overrated in research: the authors seem to have made a substantial engineering contribution.\n\nAlthough the authors may have missed Davies & Ghahramani (2014), they otherwise nicely document the literature. So the authors aren't (with the exception of the one paper) over-claiming the originality.\n\nQUALITY\n\nThe quality of the engineering is high. The authors have clearly done a lot of work in examining, (i) 22 datasets including 21 standard benchmarks, (ii) 3 performance metrics including NLL, CRPS, and RMSE, (iii) 3 different base models including LightGBM, XGBoost, and CatBoost, (iv) several types of output distributions. I checked the code the authors provided and it looks nice.\n\nCLARITY\n\nThe clarity of the paper is good. The paper is well-written and easy to understand. The code is also well-written and easy to understand.\n\nSIGNIFICANCE\n\nThe paper has potential to be significant because of the extensive engineering contributions. The (currently anonymous) code is available under an Apache 2.0 license, so my hope is that lots of others will have the opportunity to use the authors' work. As the authors write, there's a fairly big chasm between the ease-of-use of GBRT and the availability of probabilistic models, so closing that gap in an easy-to-use way is nice.\n\nMy concerns about significance are two-fold:\n\n(i) The authors only compare to other simple, easy-to-implement metrics for calculating probabilistic prediction on trees. On the one hand, the authors are right in saying that Bayesian models and more complex approaches like BART will not scale well and therefore are unlikely to be used very frequently, but on the other hand it would be nice to have comparisons between the authors' simplified approaches and the more intricate approaches.\n\n(ii) The authors claim \"better\" performance of IBUG, but I do have some concerns about this because (with trees) the goal should maybe be \"different\" performance of IBUG to other approaches. Retrofitting GBRT for probabilistic prediction is always going to be a bit of an art since the probabilistic prediction is missing from the beginning and really only added on later.\n\nAlthough I'm hoping that the authors will address these items in the rebuttal, they're not really show-stopping points. The engineering contribution here is rather extensive.\n\nMINOR\n\nLine 99: euclidean --> Euclidean\n Would the authors please check their work with respect to Davies & Ghahramani (2014) and document the differences? My impression is that, after considering the work of Davies & Ghahramani (2014), IBUG cannot claim anything methodologically new (except perhaps Section 5.4?), but happy to be corrected.\n\nWould the authors please mention any comparisons they have done between IBUG and the more intricate approaches they mention in Section 6?\n\nAre the authors happy to tone down the claims of \"better\" with respect to other competing methods?\n\nWhat is the sensitivity of IBUG to the parameter k? This is the number of nearest neighbors. This is fine." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "3M6NeFB0utd", "Dkf6p7JXv2n", "H5pQgwokEY", "ZwOOsyQoTEN", "ZwOOsyQoTEN", "0XlCCi70BkI", "0XlCCi70BkI", "0XlCCi70BkI", "Wy1ll5z7zY-", "nips_2022_v6CqBssIwYw", "nips_2022_v6CqBssIwYw", "nips_2022_v6CqBssIwYw", "nips_2022_v6CqBssIwYw", "nips_2022_v6CqBssIwYw" ]
nips_2022_zkQho-Jxky9
Counterfactual harm
To act safely and ethically in the real world, agents must be able to reason about harm and avoid harmful actions. However, to date there is no statistical method for measuring harm and factoring it into algorithmic decisions. In this paper we propose the first formal definition of harm and benefit using causal models. We show that any factual definition of harm must violate basic intuitions in certain scenarios, and show that standard machine learning algorithms that cannot perform counterfactual reasoning are guaranteed to pursue harmful policies following distributional shifts. We use our definition of harm to devise a framework for harm-averse decision making using counterfactual objective functions. We demonstrate this framework on the problem of identifying optimal drug doses using a dose-response model learned from randomized control trial data. We find that the standard method of selecting doses using treatment effects results in unnecessarily harmful doses, while our counterfactual approach allows us to identify doses that are significantly less harmful without sacrificing efficacy.
Accept
All reviewers agreed that this paper should be accepted because of the strong author response during the rebuttal phase. Specifically the reviewers appreciated the motivation of the paper, its clarity, and the author clarification of the method, its assumptions, and scope during the rebuttal. Authors: please carefully revise the manuscript based on the suggestions by the reviewers: they made many careful suggestions to improve the work and stressed that the paper should only be accepted once these changes are implemented. Once these are done the paper will be a nice addition to the conference!
train
[ "srvKUolIQMd", "LUyrUYxzhTZ", "ljVGxGg9g5T", "TKVBR-FIhv4", "CbPHfOJsSLB", "LHJtfKOkRtD", "QcBQgwuj01I", "GGwBj-monES", "cU2s9_PoWK8", "8DFXgo6ftK", "T0tGjTj2gc", "TxmAHB0HkL3", "pjw8uJ_sTuy", "LdkloKhW6ol", "ZpYJYK4E3d", "hYAVOQvTprP", "x0qwVvmjqa-", "D6JUI7Jx2J", "ay5fgXm0UKn", "msGhKUDcGg6", "1weI2UtOYcl", "Xu4C3ysVlZb", "Cs0Kd5nU25", "ukBJN4vegx2K", "HvwfbZFrdfG", "2nY7eeLywlc", "9KGFy-sFHM", "Lg0U1xPK_hL" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I revised my assessment of the work based on the answers provided by the authors.", " Thank you for addressing the comments and extending the submitted materials. The paper has been made more comprehensible.", " Massive thank you for your comments which have really improved the paper with the inclusion of the related works discussion. Also many thanks for considering the reviews and revising your score. ", " Thanks the authors for the very detailed rebuttal. \n\nBy reading the clarifications provided by authors, as well as those helpful discussions among other reviewers, I misunderstood part of this paper. I think the authors' rebuttal convince me, so I would like to increase my score to 6. \n\nThanks again for the explanation. ", " Massive thank you for taking the time to discuss this with us and improve our paper! Just to update you on the specific changes we made. \n\nAbstract updated, making it clear that we are translating a specific definition of harm, and that our results re: factual approaches apply to specific cases. \n\nLine 132-133 updated to reference further discussion of other definitions\n\nLine 328 updated to make it clear that these policies are harmful with respect to the CCA definition of harm\n\nLine 342 made it clear that these needlessly harmful actions are w.r.t Definition 5, which refers to the CCA definition\n\nConclusion: toned down the language to `are unable to avoid harmful actions in certain situations’. Refer to factual bounds on harm, limitations and related works (all of which are discussed in appendix K)\n\nAppendix B. Included discussion of two other definitions of harm\n\nAppendix K. A section discussing limitations of our approach including using counterfactual inference for measuring harm, and discuss bounding counterfactual harm using factual distributions. \n", " I think these are good changes that improve the paper, and I have updated my rating accordingly", " After some discussion we agree with your points and have tried our best to incorporate them in our paper. Will the following changes be sufficient?\n\n1. In all claims in the paper of the form `we show that no associative measure of harm can work', we will change these to `no associative measure of harm can satisfy the CCA definition' or `no associative definition of harm can satisfy our intuitions in the treatment example'. Which is better than us saying / implying `no other definition can work in all cases'. We rewrite the abstract, introduction section 5 and conclusion to reflect these changes.\n\n2. We have included a section in the appendix K (K.3. limitations) where we describe situations where using counterfactual inference to determine harm is not the best option, and given an example of a factual upper bound to the counterfactual harm so that the CCA can be used in these situations to ensure strict harm aversion even when counterfactual inference is not possible. \n\n3. We will expand our discussion of other definitions of harm in Appendix B (currently these other definitions are cited in the introduction but not expanded on beyond their criticisms of the CCH). \n\nThanks for your comments!\n\n\n", " > A definition must work in all scenarios, otherwise it has to be discarded\n\nI'm not sure how to make sense of this, since for example there are scenarios where the concept of harm does not even apply. 
Distinctions between definitions, methods, theorems, etc, are only conventions of professional writing in certain disciplines, that is not what my objection is about.\n\nThere are an abundance of purely associative definitions of fairness in the literature. There are many causal examples showing how these associative definitions can be \"wrong\" under some scenarios. But I have never seen anyone arguing that all associative definitions have to be discarded because they don't \"work in all scenarios.\"\n\n> Any attempt to maximise the treatment effect by maximising P(y | x) rather than P(y | do (x)) is guaranteed to fail in some environment\n\nYes, but nobody has yet banned all health research (for example) that is based on associations, or argued that we cannot even *define* \"risk factors.\" \n\nNow if we just take this argument one more step up the ladder we reach the current disagreement. You have given a class of examples where an interventional definition would be \"wrong,\" but this does not mean it's impossible to propose or use such a definition in other scenarios.\n\nLet me reiterate a point in my initial review to contrast with the definition of counterfactual fairness. In that setting, since a person's sensitive attribute (e.g. race, sex) is already determined, it is *necessarily counterfactual* to consider what their other variable values *would have been* if their sensitive attribute had been different. An interventional definition in that setting would necessarily have to focus on some shallow/minimal intervention like manipulating the perception of race based on changing names on a CV, without actually modeling how else that person's entire life (and hence CV) might also have been different in a counterfactual world where they had a different race (and not just a different name on their CV). By contrast, when we are considering harm we can be thinking about examples that are entirely based in the future (like whether to build a doomsday machine), where the facts have yet to be determined and hence the reasoning involved is not necessarily counterfactual.\n\n> Our paper proves by counterexample that a purely interventional definition of harm is not possible\n\nHence I must continue to disagree with the above statement. ", " Dear Reviewer 8Y89\n\nThe discussion period is coming towards its end. We wonder whether you had the chance to check our rebuttal and see if it clarified the interesting issues raised in your review (as we hoped). Otherwise, we will be happy to follow up and provide further elaboration on unanswered concerns and burning questions.\n\nWe certainly appreciate your time and attention. Thank you!", " I think this misunderstanding is due to us using a single word differently. One important point that I hope we can agree on is the distinction between a definition and a method. A formal definition must work in all scenarios, otherwise it has to be discarded. Methods don't have to work in all scenarios and in general can't (no free lunch theorems etc). For example a definition of the treatment effect must match our intuitions for how a treatment effect behaves in all situations (e.g. they are zero if there is no directed path from X to Y), and no associative definition of the treatment effect robustly captures the treatment effect. But there are associative methods for estimating treatment effects that work in some situations. In your comments you refer to a causal definition of harm, and I'm going to assume you mean a definition as described here. 
If what you meant was that rung 3 inference is not always necessary to measure harm, then I agree and please skip to the last paragraph. \n\nAssuming you mean definition...\n\nRunning analogy. Imagine our paper rather than defining harm was defining the treatment effect, in terms of distributions P(y | do (x)). We show that any associative definition can't work, by using the obvious example where X and Y are correlated but purely by a common cause, so while P(y | x) \\neq P(y | x’), the treatment effect is zero for all x, e.g. P(y | do(x)) = P(y | do(x’)). This is a “weird example” because in the general case there will be both direct and common causes between X and Y. Nevertheless, it is sufficient to show that there can be no definition of causal effect that is purely in terms of associative statistics that matches our intuitions. \n\n“I do not agree that a counterfactual definition is always necessary or always the best”. \n\nYou claim this based on an example where the counterfactual harm reduces to an interventional measure (if you calculate equation (6) in our paper for your example, you get H(A = 1) = E[U| do(A = 0)] - E[U| do(A = 1)], which matches our intuitions, see end of rebuttal). Consider the analogous situation with defining the treatment effect. There are some scenarios where causal effects reduce to associative distributions P(Y | do(x)) = P(Y | x) (i.e. no confounders). This does not mean that the causal definition of the treatment effect is not always best, or “not necessary”---definitions are definitions and must work independently of the situation. There are situations (e.g. pure confounders) where any associative definition of the treatment effect will fail. The same is true for rung 2 definitions of harm as we prove by counterexample. \n\n“What this example shows is that we do not need an SCM and counterfactual reasoning in order to choose the less harmful course of action”\n\nThe same is true in the treatment effect analogy. In some situations, we don't need interventions to measure treatment effects. Any attempt to maximise the treatment effect by maximising P(y | x) rather than P(y | do (x)) is guaranteed to fail in some environment (e.g. those where X and Y are only related by a common cause). \n\n“it would be useful context for a reader to know that an interventional definition of harm could be pursued and may be good enough for other examples”. \n\nOur paper proves by counterexample that a purely interventional definition of harm is not possible (definition being the main word), and claiming this in our paper would contradict all our main proofs. But again we think what you mean here is causal methods for estimating harm, which we discuss now. \n\nLast paragraph... (assuming you mean method)\n\nWe agree that counterfactual inference will not always be necessary to measure harm, as there are some situations where the CCH can be evaluated using rung 2 inference (as you have described). This is explicitly explored in section 5 where we consider interventional objective functions that are harm averse in a single environment. But the point of section 5 is to show that any such measure is not robust to the situation changing, and there is always a situation where they fail (analogous to associative measures for treatment effects). For example, see the third paragraph above. We agree that simply evaluating the full counterfactual definition isn't the best method for estimating harm in all cases (e.g. when the SCM isn't known). 
What we will do is include a note in the paper describing\n\n1. When does the CCH definition reduce to a rung 2 definition\n2. What are rung 2 measures that tightly bound the CCA in all situations (basically allowing one to ensure harm aversion without needing an SCM)\n\nIs this what you are requesting? Just because time is tight we include a calculation of counterfactual harm in your example, just in case this is still a sticking point.\n\nY = 0, world ends; Y = 1, world doesn't end. Y = 1 - A (world ends if we push the button, doesn't end if we don't). P(Y_{A = 0} = 1 | Y_{A =1} = 0) = 1. So H(A = 1) = U(A = 0, Y = 1) - U(A = 1, Y = 0). ", " The world ending example being deterministic does not resolve the issue with the example. Interventions, counterfactuals, and harm are all concepts which are not necessarily probabilistic. What this example shows is that we do not need an SCM and counterfactual reasoning in order to choose the less harmful course of action. So, again, a counterfactual definition is not always necessary.\n\n> we can see the factual approaches cannot work for all cases\n\nNo approach can work for all cases. No free lunch, all models are wrong, etc.\n\n> One of the the main results of the paper is a proof that rung 2 or 1 definitions of harm won’t work\n\nIf this said \"won't work for some broad class of examples and under some additional assumptions\" then I would agree. There are cases where a counterfactual approach seems necessary and clearly the best, hence the value of this paper and the reason I did not choose any rating like \"reject.\" But I do not agree that a counterfactual definition is always necessary or always the best. The lesson of Pearl's ladder is not that \"higher is always necessary or better\" or \"lower is always wrong.\" Someone who cares more about the epistemological cost of making assumptions could just as well invert the ladder and say that counterfactual reasoning is the lowest, worst form of reasoning because it relies most strongly on assumptions.\n\nAlso again: I am not requesting any significant change in the paper. When I initially asked Q1 in my review, I was not asking for the paper to be re-written about causal harm in general. It is fine to focus on rung 3 and argue for its necessity in some contexts as this paper does. I just think it would be useful context for a reader to know that an interventional definition of harm could be pursued and may be good enough for other examples, even though that is not the choice of the current paper. And I think this context is important given this paper argues against an interventional definition in some contexts, hence some readers may mistakenly conclude an interventional definition would always be wrong.", " This paper proposes a statistical definition of harm called counterfactual harm and creates a structural causal models framework that incorporates the harm into algorithmic decisions.\nI see two potential ethical concerns related to the paper: 1. over-simplifying the actions of the agents to binary and 2. potentially unable to be applied to all situations. \n\nRegarding the two thieves example, on top of either robbing or not robbing Bob, Alice does have the option of warning Bob, calling the police, stopping Eve etc. These additional non-counterfactual actions have the potential to have higher utility, and thus making Alice robbing Bob not the most morally relevant action, but merely better than doing nothing (not robbing Bob). 
I agree that using such counterfactual harm SCM framework could potentially mislead agents into making non-optimal decisions when there could be multiple actions or if counterfactual is unknown.\n\nWithout undermining the contribution, the paper can discuss the simplification of the action space/ limitation, that the SCM framework only evaluate the least harmful action in relation to the single counterfactual, and that the model is more applicable to scenarios where the agents’ actions are binary and counterfactuals are known. When the agents have multiple possible actions, they should perhaps repeatedly evaluate each default action with its counterfactual. Overall I see this could be addressed by the authors by a relevant discussion in the paper/ appendix. The authors have not acknowledged the potential oversimplification of the examples in the paper. The authors can discuss the simplification of the action space as limitation in the paper, and provide recommendation on real life scenarios where the agents could face multiple actions/ decisions and how they should adapt the SCM framework in those scenarios. ", " Thank you for clearing this up and apologies for misunderstanding your point. In our previous rebuttal we interpreted your comment as saying that this was a problem with our approach specifically, rather than this is a problem with all algorithmic approaches to making morally relevant decisions, and that it is precisely for this reason that the issue should be discussed in the paper. This was an error in communication on our part, and we have included a discussion on the limitations of our approach in appendix K, including (in the first section) a discussion clearly stating that our results do not resolve the issues you raise. \n\nWe totally agree with you on these points. In fact the entire motivation behind writing this paper was to point out another yet-unseen failure mode for any algorithmic decisions (not using counterfactual reasoning, as humans do, to determine harm). This we think is an important point, because no matter how good models become (even if they are better than humans—e.g. the AI knows that it can warn Bob in the market, but Alice does not), then it will still make mistakes that no reasonable human would. There is a tension where presenting a solution to one problem can be misconstrued as a solution to all problems in algorithmic harm (model misspecification, robustness, etc)—we hope it is now clear in the paper that this is not the case. \n\nNext we will address the world-ending machine example. \n\nShort answer: The example you give is not a counterexample to the counterfactual definition. In this example the counterfactual harm reduces to the treatment effect as the outcomes are deterministic. So the counterfactual harm is equivalent to the proposed causal measure of harm. Also it is not correct that counterfactual analysis requires more `historical experience' than interventional analysis, they are both defined over the same action-outcome space (A, X, Y). \n\nLong answer: First we note that making rung 3 counterfactual inferences requires no more or less access to historical examples as making rung 2 causal inferences. For example, if we wanted to estimate the causal effect from data we would have to push the button multiple times. As we cant do this we have to ask where the inference “if I press the button A = 1 the world will end” comes from. As this is not a repeatable experiment the inference must come from a predictive model (e.g. 
we know enough about how the device works and how the universe works to know that the device will destroy the universe). So whatever case you choose (expert reasoning, causal inference, counterfactual inference), we have to assume there is a model. Given this model we calculate the expected counterfactual harm as follows—given that I press this button, sum over all factual outcomes that can occur (in this case, the world ends with certainty) and for each of these compare the factual utility to the counterfactual outcome that would occur if I did not press the button (the world not ending with certainty). Because of the deterministic outcomes, the counterfactual harm is just the difference in expected utility given do(A = 1) and do(A = 0). So you see that the counterfactual definition of harm works perfectly well in this setting. We can also describe what happens for indeterministic outcomes but it is similar to the treatment example. \n\nRe: Q1b, the utility function U(y) is just taken because it's simple and intuitive (the users prefer living over dying, and have no intrinsic preference for one treatment over the other). It's easy to come up with equivalent counterexamples for any U(a, y) (this is what we do in section 5), but we kept it simple here to try and point out that in this intuitive setting (people like living, don't like dying, two treatments) we can see the factual approaches cannot work for all cases. One of the main results of the paper is a proof that rung 2 or 1 definitions of harm won't work, so we disagree that a counterfactual definition is a choice on equal footing with a causal definition. \n\nAlso note that if there were no counterfactual measure that robustly captures harm, there would be no interventional measure, as counterfactuals refine interventional distributions. So an equivalent proof in the opposite direction is impossible. To see the relation between counterfactual harm and any interventional definitions, note that the counterfactual harm reduces to the difference in expected utility under different interventions if you make additional assumptions: `counterfactual independence', i.e. that P(y^*_{\\bar a}, y_a) = P(y^*_{\\bar a})P(y_a), or deterministic outcomes being two examples. We can state this in the text if you think it would be useful to the reader. \n\nWe also just want to clarify that the path-specific harm is not intended to resolve model misspecification problems but to point out that in some cases what we mean when we talk about harm is a path-specific variant of harm.\n\n", " We would like to really thank the reviewer for revising their score and for their helpful comments. \n\nFollowing the reviewer's comments we have included a comment on line 145 and on line 69 to make it clear that we are assuming no unobserved confounders when deriving our theoretical results (sorry for missing this correction in our last update). We have included a discussion of the limitations due to this assumption in appendix K (paragraph beginning line 1228). We have also expanded appendix K to discuss the limitations of our work, including those raised by the reviewer, and to include some examples of how our framework might be applied to some existing implementations.", " About non-counterfactual definitions, the reply has not really addressed my point.\n\nConsider this example: I have been asked to provide an ethics audit for a company which is building a doomsday machine that will destroy the universe when someone presses the activate button A = 1. 
This technology has never existed before, so there is no historical experience for us to consider any counterfactuals. Clearly the machine will do harm if activated, and clearly an interventional definition of harm would capture that.\n\nIn the reply to Q1b, you provide a counter-example based on a strange utility function. That example shows an interventional definition of harm would not be appropriate for all possible examples. I have just given an example which shows a counterfactual definition also would not apply to all possible examples. My point is not to push the authors to change the proposed definition in this paper, but just to see that it is a specific choice to focus on counterfactual reasoning. The paper could more clearly communicate how this choice is specific and has limitations. It could, for example, mention that SCMs with an interventional definition of harm may be more appropriate for some examples.\n\nMy previous comment about acknowledging/communicating limitations also applies to Q2. Again, I am not asking the authors to change the proposal. Formal models, like SCMs or any mathematical models, are precise, narrow, technical, rigorous, etc, but also always wrong in some important ways because the real world is open ended and does not have clear boundaries and definitions of variables, etc. If we take the open ended principle to \"do no harm\" and make it formal with an SCM that implies Alice should rob Bob, that shows the formalization process went wrong somewhere. Any reasonable human who hasn't been indoctrinated with mathematics would resolve that example by saying Alice should warn Bob about Eve (and also not be out looking to rob anyone in the first place). The lesson of this example, in my opinion, is not that some path specific definition in the same SCM resolves the example, but a reminder that any formal model (causal or not) could fail to capture something important about what people mean by \"do no harm.\"\n\nEven with no formal model, I think a less abstract example would make the point more clear (instead of one about Alices and Bobs). I have been hired by a giant tech monopoly or superpower government to make its algorithmic systems less harmful. It is designing a system to automatically steal money from everyone, and it wants to A/B test whether to steal 100 currency units or 101. Those are the only options within the technical specification. Since I am human, I can make choices outside of a technical specification. My professional choices are to (A) proceed to write down an SCM and formally prove that stealing $100 does less harm, report this to my manager, get rewards and accolades and advance in my career, or (B) be an actually moral actor and challenge the technical specification, cause problems for my manager, possibly risk my career advancement, etc. Again, I think any reasonable understanding of \"harm\" that hasn't already been restricted to a mathematical model would not struggle with which action is morally correct. And because the example is no longer abstract but includes a dimension of professional advancement it also illustrates the danger of having formal models that we can use to rationalize morally objectionable systems.\n\nTo reiterate in conclusion: I know this is not a limitation of this specific paper. I do not expect the authors to provide a complete rigorous methodology that does not have any of the shortcomings that all other rigorous methods suffer from. 
All I would expect is some clear acknowledgement and communication about limitations.", " We would like to really thank the reviewer for their comments. We agree that the paper would be improved by giving a more specific explanation of how our formalism could fit into these more complex domains. Following their recommendations we have significantly expanded appendix K to include two examples of recent works where our framework can be applied to complex domains, including details as to how the counterfactual harm can be calculated in these settings. \n\nThe first example deals with complex state spaces (medical imaging). We describe how the counterfactual harm can be estimated in a recent study looking at counterfactual inference for CT scans of patients with multiple sclerosis\n\nThe second example looks at another axis of complexity---sequential decisions. We describe how our harm averse decision making (section 4) can be applied a recent paper on counterfactual inference in Markov decision processes. This study looks at determining optimal treatment policies for patients with major depression. We describe how harm averse policies can be learned using an extension of the Bellman equation that factors in the expected harm. \n\nWe have also included a section in appendix K describing related work in fairness and AI safety / ethics\n", " Thank you very much for addressing the comments. Explaining further indeed helped to clarify. On Q1 and Q2, however, after reading the added Appendix K, perhaps, further, albeit brief, explanation would help. While it is indeed the case that we do not (and may not at all) have access to the perfect causal relations in complex domains and, thus, can only rely on approximations, explicitly, how does this connect to the proposed model? Using for instance from the simple treatment-outcome set-up with binary values to more a complex scenario as hazard management.", " Dear Reviewers! Thank you so much for your time on this paper so far.\n\nThe authors have written a detailed response to your concerns. How does this change your review?\n\nPlease engage with the authors in the way that you would like reviewers to engage your submitted papers: critically and open to changing your mind. Thank you Reviewer rBDL for your initial engagement!\n\nLooking forward to the discussion!\n", " Overall the authors answered kindly although my questions are somewhat vague. \n\nI mentioned Trolley problem because it involves \"inaction\" vs \"action\" of the agent, not just choosing \"which lanes\".\n\nI couldn't find added explanation about the Q6. Causal sufficiency (Markovian) is typically assumed in causal discovery to narrow the scope but not for general causal inference where an independent noise implies trivial identifiability. Even [7] does not mention independent noise. Researchers might model independent exogenous variables but each exogenous variable may affect multiple endogenous variables (semi-Markovian).\n\nQ7 was about whether U should be defined for each individual. Currently the utility function is fixed and individuals in the population (i.e., n data points) share the function but individuals may have different utility functions (and, hence, utility values).\n\nI still don't understand \"SCMs can be learned for real systems.\" Maybe it's because the authors restricted to a Markovian case?\n\nMy \"initial\" decision to recommend rejection is based on my counterfactual reasoning whether the scientific community would be benefit from the results in this paper. 
I liked the objective in its current counterfactual form. Results are sound and representation are good. Theoretical contribution seems fair. But, are the real-world application and complex domains free of unobserved confounders? Can you confidently say that this work \"shows an immediate practical application with models currently in use in harm-sensitive applications.\"\n\nAnyway, I understand that my initial recommendation to reject is partly due to misunderstanding, and will raise to weak-accept. But I wish the authors recheck the assumptions and its implications.", " We thank the reviewer for their thoughtful feedback. \n\nQ1. How to scale up to a rather complex domain (e.g., hazardous environment in which human and AI teamp up to navigate the situation) is worth discussing.\n\nWe agree this point was missing from the paper and we have included a new appendix K to discuss this. The example you give is described in recent paper that we discuss [1]. \n\n[1] Parvaneh, Amin, et al. \"Counterfactual vision-and-language navigation: Unravelling the unseen.\" Neurips 2020. \n\n\nQ2. the discussion on related works seems to miss why the current literature has not addressed (harm) until this time, whether previous works have...falling short, and how close (or far) current deep learning frameworks on counterfactual reasoning relate to this work.\n\nWe agree this is lacking and have included in appendix K (and discuss alternatie definitions that fail in appendix B)\n\nQ3. Further, it is unclear how the causal constructs elucidated here accounted for or will account for human feedback, e.g., Alice's preferences, clinician's sense of values or human demonstrations in general (objective functions, which for example provably beneficial AI methods account for).\n\nWe discuss this briefly in example 4 and the paragraph beginning line 319. As cooperative inverse reinforcement learning and RL from human feedback use factual reward models, they cannot be robustly harm averse. consider the treatment example from the introduction. An agent can be trained on feedback that treatment 1 is good. Then a distributional shift can result in treatment 1 behaving like treatment 2, without any of the factual statistics changing. It will still maximize the factual reward. \n\n\nQ4. If indeed...counterfactual-based harm-averse framework can account for more complex harmful situations, it would be good for the authors to make this explicit. \n\nWe have included a note on this in Appendix K, describing existing work on counterfactual inference in complex domains where our definitions should be applicable. Our no-go theorems (theorems 3 and 4) apply to these more complicated scenarios (for example, if harm aversion is impossible in single decision tasks it is by extension impossible in MDPs). \n\n\nQ5. The claim that agents trained to maximize factual objective functions are guaranteed to proceed with harmful policies, albeit they are allowed to retrain, amid distributional shifts (L332~L334).... Secondly, however, why would this claim hold even if the agent can retrain itself?\n\nTheorems 3 & 4 are different to what you describe here. What we show is that for any factual objective function, there exist environments where maximising this objective functions results in needlessly harmful actions. For example, consider an agent that minimizes the L2 norm in all environments. Maybe this results is low harm actions in the training environment, and the designers conclude `the L2 norm is a good harm averse objective’. 
However, we prove that there are shifted environments where minimising the L2 norm results in needlessly harmful actions. This is not due to generalisation error, but “objective misgeneralization”. By allowing to retrain, we make the generalisation error zero so as to make this clear. Consider again the treatment example from the introduction. Let in the training environment both treatments behave like treatment 1 (e.g. f_Y(t=2, e_Y) -> f_Y(t=1, e_Y), so no harm). In this environment, both treatments cause not harm and any factual objective (e.g. maximising the treatment effect) will be harm averse. Then imagine a distributional shift where now f_Y(t=2, e^Y) returns to its original (harmful) form. Note the factual outcome statistics do not change under this shift. So T = 2 will still be optimal w.r.t the treatment effect (or any other factual measure), but is now harmful. So a needlessly harmful action belongs to the optimal set. The theorem shows this is true in general. \n\n\nQ6. How about the framework being able to learn amid contextual, P(x; M), changes? In otherwords, when distribution shift is not just on the outcome distribution, but also on the input (e.g., covariate shifts) or to both (concept drift)? \n\nAs we allow the agent to re-train following the distributional shift, covariate drive and concept drift are not relevant as the outcome distribution is essentially known by the agent following the distributional shift, and this is the only distribution required to determine the optimal policy (e.g. once P(y | a, x) is known, the optimal policy is independent of P(x)). \n\nQ7. Why was the requirement that agents have some fixed harm aversion relaxed, and tapered to requiring only that the agents do not take needlessly harmful actions?\n\nThis is to strengthen the theorem. Otherwise, maybe there are heuristics for approximating harm using factual statistics that get it approximately right (resulting in some approximation of the desired harm aversion). The theorem proves this is impossible\n\n", " We thank the reviewer for their thoughtful feedback and are encouraged by their description of the the topic of the paper being “important and significant” and our approach as being “rigorous” and “general”. We address their comments and questions below. \n\nQ1 The paper only provides a simple illustration for demonstration, without any real-world experiments & comparison. \n* The effectiveness of this work on wider domains still needs further validation, especially with the experiments on real-world datasets. \n\nIts not correct that we do not provide experiments using real-world data. The illustration in section 6 uses a GAM model that is learned on real world experimental data (a meta-analysis for randomised control trials), and we have made this clearer in the text. By comparison, in the neurips paper “counterfactual fairness” which the reviewer compares our work, the authors of this paper illustrate their definition of fairness on a fictitious model with purely synthetic data, whereas our results are demonstrated on a model learned on real data and which is used in practice to determine optimal doses of the drug Ariprazole. We chose a simple demonstration deliberately, as it shows that even in simple settings harm aversion results in very different policies to harm-indifferent optimisers. In more complex demonstrations it would be less clear if this effect is due to the specifics of the model architecture. 
(for more complex implementations see appendix K)\n\n\nQ2 The analysis setups are too specific, even though the harm definition is general by itself. This could be limited when applying such quantification scheme to a general ML settings.\n* Personally, I think the proposed work could be limited since it is hard to directly employ such harm quantification to mitigate the \"harmful\" patterns existing in commonly used models, such as neural nets.\n\nFollowing your recommendations we have now included an Appendix K which details how our framework could be applied to several recent setups where counterfactual inference is employed in complex real-world settings (e.g. using deep structural causal models). This includes in sequential decision making with an RL example, and in medical images. \n\nWhile our main demonstrations uses GAMs and these are not deep learning models, they are the most common models used for decision support systems in areas where our results will have most impact---namely in clinical trials, epidemiology, and social policy. These models are used in these fields precisely because they are simple and interpretable, and the risk of harm from model misspecification for complicated models is so high\n\nFinally, there is arguably a catch 22 problem here—The main reasons for developing deep learning systems that can support counterfactual inferences are given by theoretical results such as ours that show counterfactual inferences are necessary for practical problems (like harm aversion). \n\nQ3. I’m wondering how we can apply this definition (harm quantification) to a general ML setting, so as to mitigate some potential biases learned in models? Is the SCM a necessary setting for incorporating the proposed family of objective functions?\n\nWe use SCM models to derive our theoretical results as is common practice in Pearlean causality. For example the do calculus is derived in the SCM formalism but is applied to a widely, including deep generative models. We discuss in appendix K the potential limitations for this approach in practice, and how these are being overcome in several recent works that approximate SCMs and counterfactual inference without having to know the true underlying SCM. This includes deep learning methods and bounds on counterfactuals. \n\n\nQ4. In a more specific setting (e.g., fairness domain), can we equate or connect the harm quantification to some existing measurements (e.g., counterfactual fairness -\"Counterfactual fairness.\"? If yes, then what is the key difference for the proposed harm quantification?\n\nWe have included a discussion of the relation between counterfactual fairness and harm in appendix K. Briefly, these measures are different both technically (type vs actual causality, fairness is about estimator for outcome Y rather than outcome itself). On a conceptual level harm is distinct from fairness. For example, a treatment could harmful but administered in a fair way that doesnt discriminate based on a protected attribute. We discuss how they can be used in tandem (e.g. identifying if harm has been caused by an unfair decision). \n\nQ5. The authors should conduct a more detailed discussion between this work and other existing frameworks for quantifying \"harmful\" concepts (e.g., fairness, privacy). \n\nFollowing the reviewers suggestion we have included a discussion of this in appendix K. \n\n\n", " We thank the reviewer and look forward to continuing the discussion with them. 
Respectfully, we would like to raise a couple of concerns with the review. \n\nFirstly, the reviewer misresports the basic results of the paper (e.g. that we don't use a learned model for evaluation, that our results reduce to a trolley problem, etc), and we hope that in light of this the reviewer will consider revising their certainty score. \n\nSecondly, our paper is ranked as `good’ for soundness and presentation, and ‘fair’ for contribution, but then a high confidence rejection, which is due to its use of counterfactuals (due to a misinterpretation of the non-identifiability theorem) and equivalently its use of SCMs (which are also non-identifiable). Note that this grounds for rejection would apply to hundreds of papers that have been published at neurips and comparable venues. It appears the main reason for rejecting is because we use this established formalism, rather that due to the results themselves. \n\nQ1. It is unclear why we should define harm and benefit separately by taking max(0, difference in utility)\n\nThis point is explained on line 191 immediately after the definition. It follows from the CCA definition of harm: harm (benefit) is the increase (decrease) in utility under the counterfactual action. The max splits the counterfactual utility distribution into positive (harmful) and negative (beneficial) components with the factual utility as origin. \n\nQ2. In the example in the introduction, I understand that why doctors would favor Treatment 1 vs 2. This feels so much like a trolley problem ... What’s your explanation? In HPU, U is -5, -1 and harm is -1. So with lambda 4, there is no difference between switching lanes?\n\nWe are not sure how the reviewer has arrived at these figures. A worked example is clearly given in the text (full derivation in Appendix F). The expected utility is the same for both treatments. The harm is zero for treatment 1 and 0.1 for treatment 2. Therefore for any lambda > 0, treatment 1 is preferred. This is not equivalent to a trolley problem, as the factual outcomes for the two treatments are identical. \n\nQ3. Simply the P terms in Eq 6 and 7 are not identifiable not only from factual outcome but also from any experimental data. (Line 235–236)\n\nNon-identifiability is a well known result, but does not reduce the soundness or applicability of our results. Counterfactuals are necessary: As we show with the treatment example (now page 4), any factual measure of harm violates basic intuitive properties of harm. This example describes two treatments, one that is intuitively harmful and one that is not, but which have identical factual outcome statistics, i.e. P( Y = y | T = 1) = P(Y = y | T = 2). Any factual measure of harm is a function of these outcome distributions, and must assign the same harm values to T=1 and T = 2. Hence counterfactuals, and their required assumptions, are necessary when dealing with harm. The main definition of harm in ethics is also explicitly counterfactual. Secondly, good SCMs can be learned for real systems. For example, “A ball is dropped and bounces X high. If it was dropped from twice the height, it would have bounced 2X high”. This is counterfactual is non-identifiable but is defined w.r.t a mechanistic causal model (the classical equations of motion). Clearly, this SCM can be learned. Finally, we point out that human ethical decision making uses counterfactuals, which must make the same assumptions. Making these formal and explicit in SCMs can only improve upon this.\n\nQ4. 
For the same reason, the authors used an existing model not the data. What would be the implications of using a learned model?\n\nThis is not true. The model we use in section 6 was learned from real-world data (a meta-analysis for randomised control trials), as described in the text. \n\nQ5. It is unclear how benefit or harm are defined in medical or related research.\n\nWe use the predominant definition of harm (the CCA) which is widely cited in the philosophy of medicine, e.g. [1]\n\n[1] Engelhardt, H. Tristram. \"The concepts of health and disease.\"\n\nQ6. Def 1, why should noise be mutually independent?\n\nThis is a standard assumption (causal sufficiency) used in the definition of SCMs, described in the cited materials. We have added an explanation\n\nQ7. Is utility fixed over the entire population? A is the agent's action but U is user's utility function. User != Agent. (Y would also be user's outcome...)\n\nWe are not certain what population the reviewer is referring to. The setup is the same as in standard expected utility theory, and the utility function is an individuals (the user). We cant understand the rest of the question and ask for clarification.\n\nQ8. What's the practical importance of Y in defining harm and benefit when you have utility U? Y exists only to be marginalized out...\n\nstandard in expected utility theory. Needed to describe outcome distributional shift independently of utility shift\n", " We thank the reviewer for their thoughtful feedback and are encouraged that they describe our work as tackling a problem that is practically important, and describing the manuscript as clear, interesting, and likely to generate a lot of discussion. \n\nQ1 a: Is it really necessary to focus on counterfactuals to define an adequate notion of harm? \n\nTo answer this question we reiterate a key example that was perhaps not emphasised enough in the text (Example 1 the introduction & appendix F), where we show that any measure of harm that is based on factual inference (and so doesnt make these assumptions) violates basic intuitions about harm (not being able to tell the difference between clearly harmful and clearly non-harmful actions). From this we argue that as these assumptions are necessary for dealing with harm, they aren’t weakness of our specific definition but of any definition that can satisfy our basic intuitions about harm. \n\nThe example describes two treatments, one that is intuitively harmful and one that is not harmful, but which have identical factual outcome statistics, i.e. P( Y = y | T = 1) = P(Y = y | T = 2). Any factual measure of harm is a function of these outcome distributions, and must assign the same harm values to both. Of course, one could choose an objective function J(t, y) that depends on T, and implicitly favours T = 1. But then one just has to observe that under the distributional shift where the causal mechanisms for Y are swapped f_Y(T = 1, e_Y) <--> f_Y(T = 2, e_Y), this utility function will favour T = 1 which is now the harmful treatment. So even before we get to the predominant definition of harm (the CCA, which is explicitly counterfactual) we see that any factual measure will not work. \n\nQ1 b: What if we replace the counterfactual distribution in formula (6) with an interventional distribution? \n\nTo see what happens if we replace the counterfactual formula in (6) with an interventional distribution, consider the case where U(a, x, y) = U(y) and there are two outcomes Y=0 and Y = 1, where Y = 1 has the higher utility. 
Replacing P in (6) with its interventional equivalent P(y_a | x) instead, i.e. the factual outcome distribution, reduces the harm to P(Y_a = 1 | X = x)(U(Y = 1) - U(Y = 0)) which is the casual effect of do(A = a) times the utility difference. So the harm would reduce to a treatment effect, which as we have shown in the above example is not suitable for capturing harm (e.g. it cannot differentiate between treatments 1 and 2 in the motivating example). \n\nQ2 a: Is it too narrow to operationalize harm in a (simple) SCM framework? [+ question about the two thieves example]\n\nWe use the SCM framework to derive our theoretical results and definitions as is standard in Pearlean causality (e.g. all the main results of the do-calculus are derived in the SCM framework), and doesnt limit our theoretical results to simple systems. For practical applications, there are lots of recent works (including those published at neurips) that extend SCMs to complex domains (for example, deep structural causal models, which have been applied to medical imaging). We discuss these in relation to our work in a new appendix K. \n\nIn the two theives example, if Alice’s default action was to warn Eve then the harm value would be higher. So to determine harm we need to have a model that incorporates the desired default action. That models with oversimple action spaces may result in bad decisions is true for any model-based methods, not just our result—It would also be true for the proposed `interventional harm’. For example, treatment effects cannot be used to determine the best treatment if we miss important available treatments from the analysis. \n\nQ2 b: Similarly, a policy optimised to avoid harm according to a model built on one set of variables could cause harm involving other variables which were not included in the model....however, the proposed narrow definition of harm would not address such issues.\n\nThis can be applied to all methods that use statistical modelling in decision theory. When a doctor makes a clinical decision based on a counterfactual inference (i.e `the patient would have responded better to a different treatment’), they are necessarily basing this on a mental model that includes untestable mechanistic assumptions (in order to support counterfactual inferences). This is an inescapable fact, but clearly they are capable of doing it (good inductive biases and heuristics for learning SCMs exist), and while it is possible that causal models make mistakes due to wrong assumptions and oversimplifications, the same is true for these mental models that support a huge number of human decisions in medicine, law, etc. We believe that making these models formal and explicit is a route to reducing the effect of these errors rather than increasing them.", " This paper doesn't raise direct ethical issues as listed in the NeurIPS ethical guidelines, hence the \"no\" above. However, the statistical approach used for a definition of \"harm\" has its limits when combined with the philosophical one of the CCA approach, which is already controversial in the field (eg see: Carlson, 2019). Moreover, I am afraid the authors do not clearly distinguish between questions of bioethics (like in the examples and appendix), where the notion of \"harm\" is more easily defined, and questions of AI ethics/machine ethics and their complexity. The definitions given in the introduction about \"harm\" confuse ethics and law and have a somewhat superficial ground. 
The real-life applications of this method could be ethically sensitive, which seem to work on a logical level but would be hard to imagine applied to AI. Limitations of the approach are not addressed, and the current state of the paper shows a somewhat superficial literature review regarding \"harm\" and its definitions, which is the main object of the research and doesn't give enough strength to the main argument. It is important not to reduce the impact of AI to one single statistical measure but to consider the real-life situations and the sociotechnical background in which AI operates, or at least acknowledge this aspect in the research. It would help the readers to discuss the reasons for choosing the CCA approach instead of other normative frameworks because, in the current state, it seems that the choice has been given by the simplicity of adapting it to statistics instead of a reasoned and justified ethics choice. It's important not to underestimate the concept of \"harm\" in AI, and to argue and justify using an ethical framework instead of another. The concept of \"harm\" could also use other references in its definition (in both ethics and ML literature).", " This paper uses a structural causal modeling (SCM) framework to define a statistical notion of harm which the authors name counterfactual harm (CH). The statistical definition extends a philosophical one named the comparative counterfactual account, which states that an action causes harm to a person if that person would have been better off had the action not occurred. The paper introduces a harm penalized utility (HPU) objective and compares the optimal actions according to this objective with others including an unpenalized utility and an objective that formalizes risk-aversion (penalizing the second moment) across several illustrative examples. The paper argues that avoiding harm requires causal reasoning, and specifically counterfactual reasoning, and shows that optimal actions according to other objectives can lead to causing harm. It presents several definitions and theorems to generalize this argument from the examples- the theorems show that various non-CH objectives can cause harm under a distribution shift in the data. The paper also illustrates its approach by comparison on a real data example involving dose responses for a drug. The paper considers problems which are practically important. It is clear, interesting, and will likely generate a lot of discussion. A strength of its approach is the direct focus on things that truly matter, like utility, but a corresponding weakness is that much of the relevant quantities cannot be observed and require strong modeling assumptions, like the knowledge of an SCM. Note that this is not a specific weakness of the current paper, but one it shares with others that have been published in comparable conferences (including some of the cited related literature). Q1: Is it really necessary to focus on counterfactuals to define an adequate notion of harm? What if we replace the counterfactual distribution in formula (6) with an interventional distribution? I believe the resulting definition would still be interesting and usable for many of the examples considered. An “interventional comparative account” of harm could say that an action will cause harm to a person if that person will (probably) be better off if a different action is chosen. Since many of the examples in the paper consider actions that may actually be feasible this definition may suffice for them. 
By contrast, the definition of counterfactual fairness (for example) considers attributes which cannot feasibly be changed, hence it is necessarily counterfactual and could not be interventional.\n\nQ2: Is it too narrow to operationalize harm in a (simple) SCM framework? The model focuses on specific variables, while our intuition about harm usually involves a more open and wholistic idea of the world. Consider the two thieves example in Appendix D. The model specifies that Alice can only choose to rob or not rob Bob, while in the real world Alice could choose to simply warn Bob about Eve. Why should we apply the path-specific definition of harm that ignores an important way that Alice’s action can impact Bob’s wellbeing? I don’t think this resolves the paradox, but expanding the action space can. Similarly, a policy optimized to avoid harm according to a model built on one set of variables could cause harm involving other variables which were not included in the model. I think this is not just a curious theoretical issue because in algorithmic systems the model assumptions can be become built in and the data infrastructure can grow and increase in complexity, possibly cementing in place a harmful set of assumptions. However, the proposed narrow definition of harm would not address such issues.\n\nQ3: Are the authors certain about the claims in the paper regarding the \"first\" statistical definition of harm? It would require a truly massive literature search to establish this. It seems to me the contribution of the paper is interesting enough to speak for itself, without needing to be claimed as a first in kind. Some already highlighted under the weaknesses, like reliance on untestable assumptions and use of unobservable constructs, and most importantly the first two questions listed above.", " This paper considers defining harm as a counterfactual quantity involving user’s utility. The harm is defined based on the expected difference in the utility following the policy and the utility following the default action. Equipped with the definition of harm, the authors proposed harm penalized utility [HPU] and relate to its implications to decision making. Authors investigated a few properties of the definition(s) and provide a proof-of-concept simulation result in the context of dose-response showing that a policy can be optimized to lower harm depending on the user specified hyper-parameter. \n I would like to thank the authors first for an intriguing notion for harm (and benefit).\n\nStrengths\n- As a causal inference researcher, seeing the definition of harm at the level of “counterfactual” not “intervention” is refreshing.\n\nWeaknesses\n- It is unclear why we should define harm and benefit separately by taking max(0, difference in utility)\n- Simply the P terms in Eq 6 and 7 are not identifiable not only from factual outcome but also from any experimental data. (Line 235–236)\n- For the same reason, the authors used an existing model not the data. What would be the implications of using a learned model? A learned model only reflects the factual outcome correctly and will estimate counterfactual harm incorrectly.\n- It is unclear how benefit or harm are defined in medical or related research.\n \n- In the example in the introduction, I understand that why doctors would favor Treatment 1 vs 2. This feels so much like a trolley problem where inaction can cause 5 people die and action (switching) can cause 1 people die. What’s your explanation? In HPU, U is -5, -1 and harm is -1. 
So with lambda 4, there is no difference between switching lanes?\n- Def 1, why should noise be mutually independent?\n- Is utility fixed over the entire population? A is the agent's action but U is user's utility function. User != Agent. (Y would also be user's outcome...) \n- What's the practical importance of Y in defining harm and benefit when you have utility U? Y exists only to be marginalized out. For decision making, how would you use Y? where Y is not observed for the current user and X is given... Please see the weaknesses especially about the non-identifiability of the measure. One may show the applicability of new definition under a learned model, etc.\n=======\nI raise my score which now reflects better understanding of the paper.", " This paper rigorously define the concept of \"counterfactual harm\" using statistical analysis. Overall, the authors quantify the \"benefit\" & \"harm\" based on a common definition of counterfactual comparative account (CCA). With the mathematical formulation of counterfactual harm, the authors further derive a family of counterfactual objective functions for mitigation. Solid theoretical analysis are provided with the structural causal modeling, and an empirical example illustration is shown to demonstrate the effectiveness of the proposed quantification of counterfactual harm. Strengths:\n\n- The topic of this paper is important and significant, which can largely benefit the areas such as fairness AI & ethical AI. The provided quantification method is general, so many down-stream application scenarios could refer. \n\n- The equations in the main body of the paper is well introduced and clarified, which is easy to follow. \n\n- The authors provide many contexts and background information which is very useful for readers. The provided high-level examples can help quickly extract the main idea of this paper. \n\nWeakness:\n\n- The overall paper is not well structured. The section titles are kind of unclear. \n\n- The analysis setups are too specific, even though the harm definition is general by itself. This could be limited when applying such quantification scheme to a general ML settings. \n\n- The paper only provides a simple illustration for demonstration, without any real-world experiments & comparison. This makes the proposed work hard to be fully evaluated regarding to its overall performance. 1. I'm wondering how we can apply this definition (harm quantification) to a general ML setting, so as to mitigate some potential biases learned in models? Is the SCM a necessary setting for incorporating the proposed family of objective functions? \n\n2. In a more specific setting (e.g., fairness domain), can we equate or connect the harm quantification to some existing measurements (e.g., counterfactual fairness - Kusner, Matt J., et al. \"Counterfactual fairness.\" NeurIPS'17)? If yes, then what is the key difference for the proposed harm quantification? \n\n3. Is it possible to design some validation experiments on real-world data for this work? 1. The major theoretical analysis & results are purely based on the statistical causal analysis, which may not fit in the settings of standard ML scenarios. Personally, I think the proposed work could be limited since it is hard to directly employ such harm quantification to mitigate the \"harmful\" patterns existing in commonly used models, such as neural nets. \n\n2. The effectiveness of this work on wider domains still needs further validation, especially with the experiments on real-world datasets. 
The current illustration part could be too limited to fully evaluate the proposed harm measurements. \n\n3. The authors should conduct a more detailed discussion between this work and other existing frameworks for quantifying \"harmful\" concepts (e.g., fairness, privacy). It seems like the authors try to make the definition from a higher level, but we readers/researchers also need to know what generality is brought by this paper. In the current version, this part is quite limited in my view. ", " The paper argues that harm, evasion of which is key for AI to be safe and ethical when deployed, is intrinsically a counterfactual quantity. Further, it argues that conventional ML is guaranteed to end up with harmful policies if unable to perform counterfactual reasoning. As its contribution, it claims to be the first to produce a statistical definition of harm and to have derived a set of counterfactual objective functions to robustly mitigate harm. Finally, using a statistical model for identifying optimal drug doses, the paper demonstrates that its framework, which integrates harm into algorithmic decisions, can identify doses that are significantly less harmful without sacrificing efficacy, whereas standard algorithms yield significant harm. The paper is very well written. The social significance of the research in general, together with its arguments, research questions and formal elucidations, is well positioned and articulated.\n\nHowever, formal frameworks and models inherently abstract or simplify the strong ties to more complex real-world contexts that may be required, while claiming generalizability. While a generalized additive model was used to illustrate the counterfactual framework for dose-response predictions, how to scale up to a rather complex domain (e.g., a hazardous environment in which humans and AI team up to navigate the situation) is worth discussing. Further, it is unclear how the causal constructs elucidated here accounted for or will account for human feedback, e.g., Alice's preferences, a clinician's sense of values, or human demonstrations in general (objective functions, which for example provably beneficial AI methods account for). While it is not a question for me that this is the first to provide a statistical definition of harm and render it a counterfactual framework, the discussion of related works seems to miss the build-up of why the current literature has not addressed it until this time, whether previous works have sought or alluded to this research direction and might be falling short, and how close (or far) current deep learning frameworks on counterfactual reasoning relate to this work. \n\nIf indeed the paper elucidated or proved that the proposed counterfactual-based harm-averse framework can account for more complex harmful situations, it would be good for the authors to make this explicit. The paper also showed how outcome-dependent optimal policy learning happens within the framework. How about the framework being able to learn amid contextual, P(x; M), changes? In other words, when the distribution shift is not just on the outcome distribution, but also on the input (e.g., covariate shifts) or both (concept drift)? Lastly, some support (proof) would help to qualify certain claims (assumptions), as follows:\n1. The claim that agents trained to maximize factual objective functions are guaranteed to proceed with harmful policies, albeit they are allowed to retrain, amid distributional shifts (L332~L334). For one, Scholkopf et al. 
[1] argue that change in data distribution \"may lead to (arbitrarily) inaccurate predictions\". Thus, this claim should be situated on the premise that the characterized objective function is pertinent only to domains where there is vulnerability to harm. Secondly, however, why would this claim hold even if the agent can retrain itself?\n2. Why was the requirement that agents have some fixed harm aversion relaxed, and tapered to requiring only that the agents do not take needlessly harmful actions? On the two brief mentions of limitations, namely, (1) that it remains \"an open question as to how counterfactual reasoning can be achieved with current implementations\", and (2) \"there is growing interest in counterfactual reasoning with deep learning models\", it would help to discuss this further in a section on related works." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "LUyrUYxzhTZ", "hYAVOQvTprP", "TKVBR-FIhv4", "cU2s9_PoWK8", "LHJtfKOkRtD", "QcBQgwuj01I", "GGwBj-monES", "8DFXgo6ftK", "9KGFy-sFHM", "T0tGjTj2gc", "pjw8uJ_sTuy", "nips_2022_zkQho-Jxky9", "ZpYJYK4E3d", "ay5fgXm0UKn", "Cs0Kd5nU25", "x0qwVvmjqa-", "msGhKUDcGg6", "nips_2022_zkQho-Jxky9", "Xu4C3ysVlZb", "Lg0U1xPK_hL", "9KGFy-sFHM", "2nY7eeLywlc", "HvwfbZFrdfG", "nips_2022_zkQho-Jxky9", "nips_2022_zkQho-Jxky9", "nips_2022_zkQho-Jxky9", "nips_2022_zkQho-Jxky9", "nips_2022_zkQho-Jxky9" ]
nips_2022_DDEwoD608_l
Hand-Object Interaction Image Generation
In this work, we are dedicated to a new task, i.e., hand-object interaction image generation, which aims to conditionally generate the hand-object image given the hand, the object, and their interaction status. This task is challenging and research-worthy for many potential application scenarios, such as AR/VR games and online shopping. To address this problem, we propose a novel HOGAN framework, which utilizes the expressive model-aware hand-object representation and leverages its inherent topology to build the unified surface space. In this space, we explicitly consider the complex self- and mutual occlusion during interaction. During final image synthesis, we consider different characteristics of hand and object and generate the target image in a split-and-combine manner. For evaluation, we build a comprehensive protocol to assess both the fidelity and structure preservation of the generated image. Extensive experiments on two large-scale datasets, i.e., HO3Dv3 and DexYCB, demonstrate the effectiveness and superiority of our framework both quantitatively and qualitatively. The code will be available at https://github.com/play-with-HOI-generation/HOIG.
Accept
On the surface, this paper seems to be split between three borderline rejects (4) and one strong champion of the paper (10). However, this is not the full story, since two of the reject-inclined reviewers, Bc9p and zW9y did not participate post-rebuttal, despite multiple prods from the AC. The AC examined the stated weaknesses from Bc9p and zW9y, the authors' response to them, as well as the paper. The AC does not think that the concerns are paper stopping if they were left unaddressed (e.g., a glaring experimental weakness, an incorrect statement) and moreover finds the authors' response to these concerns satisfactory. The AC is then left with the reviews from p4XG (4) and aYmL (10). The remaining primary concern comes from p4XG, who points out that the method has a lot of inputs, which makes the problem considerably easier. The AC understands this concern (and thinks lowering the input requirements would be a great next step), but on balance is inclined to accept the paper. This is motivated by the quality of the results as well as aYmL's enthusiasm for the work. The AC would encourage the authors to use the extra page to give clear responses to the reviewers' questions (e.g., what are the applications, why not just do a direct render) in the final version of the paper. Others will have similar questions.
train
[ "0GT7KBLx9C_", "HTuo1Ay92e", "OVxPlcWr89", "Y3KXfmCWlUh", "ydYn7I5QDe", "tUyURI-9A7", "XSY5ED5XxYbK", "xwe8EPvoq0P", "EJIh67NRlRN", "ugJ875jiFL6", "HsUIE-tfiZJ", "TwtFjpqLArG", "bwCSu03lCI_", "OSglgaIxkNW", "ozBistzqtda", "8icyokoPXhk", "DZoHGNHgaCK" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer p4XG,\n\nWe would like to thank again for your time and valuable comments.\nHopefully, our rebuttal and new submitted revision could properly address your concerns.\nWe look forward to your feedback and will appreciate it if you could upgrade your score.\n\nWish you a nice day.\n\nBest,\n\nAuthors of Paper 1135", " Dear reviewer zW9y,\n\nWe would like to thank again for your time and valuable comments.\nHopefully, our rebuttal and new submitted revision could properly address your concerns.\nWe look forward to your feedback and will appreciate it if you could upgrade your score.\n\nWish you a nice day.\n\nBest,\n\nAuthors of Paper 1135", " Dear reviewer Bc9p,\n\nWe would like to thank again for your time and valuable comments.\nHopefully, our rebuttal and new submitted revision could properly address your concerns.\nWe look forward to your feedback and will appreciate it if you could upgrade your score.\n\nWish you a nice day.\n \nBest,\n\nAuthors of Paper 1135", " Hi Reviewers,\n\nThe discussion period is closing soon. Please take a look at the responses from the authors. If you have further questions, please ask them now, since the authors will be unable to respond soon. It's substantially more productive, effective, and reasonable to have a quick back-and-forth with authors now than to raise additional questions or concerns post-discussion period that the authors are unable to address. \n\nThanks,\n\nAC", " Dear reviewer p4XG:\n\nWe sincerely appreciate you for the acknowledgement, the precious review time and valuable comments. \nWe have updated the new revision and highlight the changes in blue in the revision.\nWe hope to further discuss with you whether or not your concerns have been addressed. \nPlease let us know if you still have any unclear parts of our work.\n\nWish you a nice weekend.\n\nBest,\n\nAuthors of Paper 1135", " Dear reviewer zW9y:\n\nWe sincerely appreciate you for the precious review time and valuable comments. \nWe have provided corresponding responses, experiment results on bootstrapping the training of hand-object interaction analysis, and updated the new revision, which we believe have covered your concerns.\nFor convenience, we highlight the changes in blue in the revision.\nWe hope to further discuss with you whether or not your concerns have been addressed. \nPlease let us know if you still have any unclear parts of our work.\n\nWish you a nice weekend.\n\nBest,\n\nAuthors of Paper 1135", " Dear reviewer Bc9p:\n\nWe sincerely appreciate you for the precious review time and valuable comments. \nWe have provided corresponding responses and updated the new revision, which we believe have covered your concerns. \nFor convenience, we highlight the changes in blue in the revision.\nWe hope to further discuss with you whether or not your concerns have been addressed. 
\nPlease let us know if you still have any unclear parts of our work.\n\nWish you a nice weekend.\n\nBest,\n\nAuthors of Paper 1135", " We sincerely appreciate your valuable and constructive comments.\nOur detailed responses are listed below and we have revised the manuscript accordingly.\n\n**Q1: Clarification on the necessity of current input information and challenges of leveraging the information in the adopted inputs.**\n\n**A1:** Logically, current inputs are the necessary conditions to tackle this challenging task and they can be easily obtained via methods like [1, 2].\nFurthermore, baselines like MG2T + OBJ, which adopt similar inputs, do not produce satisfying results.\nTherefore, it is non-trivial to design a framework which can take advantage of these inputs, and our technical contribution focuses on the designed methodology to tackle this task.\n\nFirstly, it is difficult to handle the complex self- and mutual occlusion during hand-object interaction.\nLinking the source 3D posture and 2D appearance under the context of two co-occurring interacting instances has not been attempted in previous generative methods.\nWe achieve this efficiently by building the unified surface map to explicitly consider their occlusion conditions and provide abundant topology information inherent in the model for the following target image generation.\nLeveraging this information, the hand-object generator further considers different characteristics of hand and object and generates the target image in a split-and-combine manner.\n\n[1] Shreyas Hampali et al. \"Honnotate: A method for 3D annotation of hand and object poses.\" CVPR 2020.\n\n[2] Zhe Cao et al. \"Reconstructing hand-object interactions in the wild.\" ICCV 2021.\n\n**Q2: Clarification on the comparison baseline of GestureGAN.**\n\n**A2:** Since our explored new task has no existing baselines, we have to adapt methods from the most related single-hand generation.\nGestureGAN and MG2T are the only two existing representative methods in single-hand generation to compare with.\n1\\) Based on GestureGAN, we clarify our modifications to it in Line 208-212.\nIt utilizes sparse 2D hand keypoints with inherent ambiguity and produces inferior generation results, which indicates the necessity of our adopted input.\n2\\) We also compare with MG2T + OBJ, which adopts similar inputs to ours.\nOur method achieves better performance than it, which validates the effectiveness of our designed methodology for this task.\n\n\n**Q3: Clarification on our technical contribution.**\n\n**A3:**\n1\\) Notably, one of the main contributions of our work is that it is the first to study the new HOIG task, which has broad research and application value.\nThe exploration of the new task is non-trivial, including the problem formulation, clarification of the task value, comprehensive baselines, and evaluation methodology.\nThis contribution is the cornerstone of our work and should be counted as novelty.\n\n2\\) Under the new task formulation, we propose the HOGAN framework, and its technical contribution focuses on the proposed methodology to tackle the main challenges of the HOIG task.\na) Specifically, we propose occlusion-aware topology modeling to resolve the complex self- and mutual occlusion during interaction.\nb) During the synthesis stage, we consider different characteristics of hand and object and generate the target image in a split-and-combine manner.\nNotably, utilizing existing building blocks is just a tool and is not claimed as our contribution.\nWe 
are not aiming at building new building blocks in the work.\n\n3\\) For this new HOIG task, we build comprehensive baselines and multi-perspective evaluation metrics.\nUnder the evaluation metrics, extensive experiments validate the effectiveness and superiority of our method over baselines both quantitatively and qualitatively.\nOur methodology is carefully designed with explicitly considering the characteristics of this new task.\nCompared with baselines, our framework generates images without blurry, texture aliasing, and false hand-object interaction, as shown in Figure 3.\n\n4\\) For this new HOIG task, we also explore plentiful applications, *i.e.*, object texture editing, real-world hand-object generation and data augmentation for HOPE (in revision).\nThese applications further enrich the value of our method and proposed task.", " We sincerely appreciate your valuable and constructive comments.\nOur detailed responses are listed below and we revise the manuscript accordingly.\n\n**Q1: Clarification on the application and demonstration of hand-object interaction, and difference with human-object interaction.**\n\n**A1:** Hand-object interaction is quite different from human-object interaction in multiple aspects, especially granularity (scale).\nHuman-object usually focuses on more holistic interaction between the body and the object, *e.g.* bicycle, skateboard, suitcase and *etc.*\nIn contrast, hand-object usually focuses on the fine-grained interaction inside the view between hand and object like phone, pencil, bottle, and *etc.*\nThis task is also of great importance and contains application scenarios like AR/VR and online shopping.\n\nFor example, when a consumer is shopping online, interaction visualization will give him/her an immersive experience.\nFurthermore, if consumers want some customization on the object, *e.g.* adding the name on the phone, our provided application can achieve this through object texture editing, as shown in Figure 4(a).\nBesides, in the online shopping scenario, consumers usually do not have the object.\nThey only need to upload a picture of their hand, and we can give them a real interaction experience via generating hand-object images with their hand identity preserved, as shown in Figure 4(b).\n\n\n**Q2: More application on utilizing synthesized data to bootstrap the training of hand-object interaction analysis.**\n\n**A2:** We have cited your mentioned paper and added another application, *i.e.,* utilizing synthesized data to boost the performance of hand-object pose estimation.\nAs shown in Table 1, we adopt the backbone in [21].\n\"Baseline\" denotes directly training the backbone on HO3D dataset, while \"Baseline + Aug\" represents the method training on both HO3D and our synthesized data.\nIt can be observed that \"Baseline + Aug\" outperforms the baseline method under all metrics, especially in the object pose.\n\nTable 1. Application on bootstrapping the training of hand-object interaction analysis.\n\n| Method | Hand AUC | Object Avg\\. 
ADD\\-0\\.1D |\n|:-----------------:|:----------:|:-------------------------:|\n| Baseline | 77\\.2 | 67\\.6 |\n| Baseline \\+ Aug | **78\\.0** | **76\\.8** |\n\n**Q3: Clarification on the input issue and robustness analysis.**\n\n**A3:** The expected input of our framework is easy to obtain automatically through the method like [1].\nTo demonstrate the robustness of each approach, we add noise on the input pose of each method to mimic the case of pose misalignment and evaluate its impact on the generation quality.\nIn practice, we perturb the pose by adding random noise with the range of 30\\% of its magnitude.\nAs shown in Table 2, our method demonstrates robustness on pose misalignment, *i.e.*, only +0.6 FID and -0.004 LPIPS due to perturbation, which exhibits more robustness than MG2T + OBJ.\nGestureGAN + OBJ still produces very blurry results.\nAlthough the perturbation affects its generated images, the performance keep relatively unchanged.\nCompared with them, our method still achieves the best performance after noise perturbation.\n\n[1] Shreyas Hampali et al. \"Honnotate: A method for 3D annotation of hand and object poses.\" CVPR 2020.\n\nTable 2. Comparison with baselines on pose information w/ and w/o perturbation.\n\n| Method | w/o | perturbation | w/ | perturbation |\n|:------:|:---:|:------------:|:--:|:------------:|\n| | FID | LPIPS | FID | LPIPS |\n| GestureGAN + OBJ | 82\\.0 | 0\\.316 | 82\\.1 | 0\\.316 |\n| MG2T + OBJ | 45\\.6 | 0\\.214 | 48\\.9 | 0\\.219 |\n| HOGAN | **41\\.9** | **0\\.172** | **42\\.5** | **0\\.176**|", " **Q9: Results with darker color hands and whether the framework is agnostic to hand texture, size, and color.**\n\n**A9:** We have tested the darker color hand and the images are shown in the [here](https://anonymous.4open.science/r/HOIG-2F04/assets/dark_hand.png).\nIt can be observed that the generated images still exhibit consistent structure with the target posture and preserve the identity in the source image.\nOur framework is agnostic to the hand texture, size, and color.\nWe test our framework on the real-world hand image in Figure 4 (b).\nAs shown in the highlighted red box, the generated image maintains the hand identity well.\nHowever, MANO only models the hand itself, without modeling the ring attached to it and we are not able to access the whole ring information via the source image.\nTherefore, if the finger wears the ring, it is hard to reconstruct the ring in the target pose.\n\n**Q10: Response to the \"Suggestions\" section.**\n\n**A10:** We will correct them following your suggestions. 
The VGG is implemented as VGG-19.\n\n**Q11: Clarification on the following open source.**\n\n**A11:** Due to the double-blind reviewing policy, current source code is shown in the anonymously open-sourced Github with the link [https://anonymous.4open.science/r/HOIG-2F04/](https://anonymous.4open.science/r/HOIG-2F04/).\nWe will release our modified baselines, our proposed framework, and their pre-trained model checkpoints publicly to Github once our paper gets accepted.", " We sincerely appreciate your valuable and constructive comments.\nOur detailed responses are listed below and we revise the manuscript accordingly.\n\n**Q1: Importance of synthetic dataset creation and its application in deep learning.**\n\n**A1:** Based on our explored task, it is also of great importance for synthetic data creation, which contains the potential application to boost the performance on hand-object interaction pose estimation (HOPE).\nCurrent HOPE methods are usually deep-learning-based, but their performances are limited by the size of training data due to the annotation cost.\nOne way to fertilize the model is to utilize synthetic data.\nOur framework is also ready to generate images, which model real-world characteristics well and boost HOPE performance.\nWe have verified the effectiveness of synthetic dataset creation in the aspect of boosting HOPE performance as shown in Table 1.\nAs shown in Table 1, we adopt the backbone in [21].\n\"Baseline\" denotes directly training the backbone on HO3D dataset, while \"Baseline + Aug\" represents the method training on both HO3D and our synthesized data.\nIt can be observed that \"Baseline + Aug\" outperforms the baseline method under all metrics, especially in object localization.\n\nTable. 1 Application on bootstrapping the training of hand-object interaction analysis.\n\n| Method | Hand AUC | Object Avg\\. 
ADD\\-0\\.1D |\n|:-----------------:|:----------:|:-------------------------:|\n| Baseline | 77\\.2 | 67\\.6 |\n| Baseline \\+ Aug | **78\\.0** | **76\\.8** |\n\n\n**Q2: Mechanism for hyper-parameter tuning.**\n\n**A2:** We choose the hyper-parameter by our experience.\nFor the number of epochs, we keep it consistent among all methods for fair comparison.\nGrid search may bring better performance.\n\n\n**Q3: Clarification on the details of qualitative analysis.**\n\n**A3:** Our IRB application has been granted by our institution.\nDue to the double-blind reviewing policy, we will release the approval once our paper gets accepted.\nThe qualitative study is outsourced to the hired participants, without any author of this paper.\nWe hire the participants in person and we randomly hire the participants in our college.\nWe have estimated the hourly wage in our local region and have paid them a gift of equal value.\n\n\n**Q4: How the model applies to non-rigid object interaction.**\n\n**A4:** Since the available object models are rigid, it is hard to model the non-rigid object interaction in the current form.\nAs future work, if we can get access to the object model with the non-rigid modeling capability, which contains the parameters depicting the non-rigid deformation, we can further explore our model to be applied in this scenario.\n\n\n**Q5: How to expand the model to hand-object interaction without an initial source.**\n\n**A5:** The initial source is an important information for our framework, which provides the appearance of hand for the generated images.\nA possible approach to expand the model without an initial source is to use pre-defined hand surface as texture information to generate images.\nHowever, this extension may need a higher cost, since it needs to collect the realistic hand texture.\n\n\n**Q6: How the model performs in case of non-symmetric objects vs symmetric objects.**\n\n**A6:** Our model does not contains the assumption on the symmetric characteristic of the object.\nTherefore, our model performs fairly well regardless of whether the object is symmetric.\nWe visualize some samples in [here](https://anonymous.4open.science/r/HOIG-2F04/assets/symmetric.png).\n\n**Q7: Whether the model performs greatly independently of the object size.**\n\n**A7:** Since our model does not contain the assumption on the object size, our model performs consistently well independently of the object size.\nThe drill and banana exhibit large differences in object size and appearance. 
\nAs shown in Figure 3, the hand-drill and hand-banana images generated by our model are consistent with their ground-truth images.\n\n**Q8: Clarification on the details on the inpainted background branch.**\n\n**A8:** Since MANO only models the hand without the arm and there are no indicators on the arm, it is hard to remove the entire arm part in the inpainted background.\nWe will leave it as future work.", " **Q6: Clarification on the training of the hand-object generator.**\n\n**A6:** The hand-object generator is trained in an end-to-end manner and all the networks are being trained simultaneously.\n\n**Q7: Clarification on the framework exploring the hand-object interaction.**\n\n**A7:** In the \"occlusion-aware topology modeling\" stage, we explicitly explore hand-object interaction.\nSpecifically, we build the unified surface space, and consider the complex self- and mutual occlusion during hand-object interaction.\nWith this stage, the complex relationship between hand and object is disentangled and the visible parts of hand and object are synthesized aligning the target image plane, respectively.\nConsidering the different characteristics of hand and object, the hand-object generator further produces the final target image in a split-and-combine manner. ", " We sincerely appreciate your valuable and constructive comments.\nOur detailed responses are listed below and we revise the manuscript accordingly.\n\n\n**Q1: Clarification on utilizing the photo-realistic rendering-related algorithm to solve this problem and comparison with the photo-realistic rendering algorithm.**\n\n**A1:** \n1\\) For photo-realistic rendering, one main procedure is to get the reliable texture.\nHowever, the source image only contains the partial hand texture.\nThat is to say, if we extract the hand texture from the source image and render the hand-object image under the guidance of the target posture, the hand part of the generated image is incomplete.\nTherefore, the direct rendering method is not applicable.\n\n2\\) As suggested, after applying a rendering algorithm, we can further improve the photo-realism of the synthetic image using a style transfer method.\nActually, our framework has integrated this idea.\nIn the \"occlusion-aware topology modeling\" stage, the extracted visible hand texture in the source image and pre-stored object texture are rendered to the target image plane as the coarse disentangled hand-object image (\"Hand Input\" and \"Object Input\"), as shown in the lower-left corner of Figure 2.\nDuring the \"hand-object generator\" stage, these coarse images are further refined for the target hand-object image with their different characteristics considered.\n\n3\\) We have compared with a photo-realistic rendering-related method, *i.e.*, MG2T + OBJ.\nIn this method, the rendered hand and object images are further refined to produce the hand-object translation result.\nThe method details are stated in Line 213-216.\nOur method outperforms it under all metrics, which demonstrates the necessity and effectiveness of our framework design.\n\n\n**Q2: Clarification on the technical contribution.**\n\n**A2:** \n1\\) Notably, one of the main contributions in our work is the first to study the new HOIG task, which contains board research and application value.\nThe exploration on the new task is non-trivial, which includes the problem formulation, clarification of the task value, comprehensive baselines and evaluation methodology.\nThis contribution is the cornerstone of our work and should be 
counted as novelty.\n\n2\\) Under the new task formulation, we propose the HOGAN framework and its technical contribution focuses on the proposed methodology to tackle the main challenges of the HOIG task.\na) Specifically, we propose occlusion-aware topology modeling to resolve the complex self- and mutual occlusion during interaction.\nb) During the synthesis stage, we consider different characteristics of hand and object and generate the target image in a split-and-combine manner.\nNotably, utilizing the existing building block is just the tool and not claimed as our contributions.\nWe are not aiming at building new building blocks in the work.\n\n3\\) For this new HOIG task, we build comprehensive baselines and multi-perspective evaluation metrics.\nUnder the evaluation metrics, extensive experiments validate the effectiveness and superiority of our method over baselines both quantitatively and qualitatively.\nOur methodology is carefully designed with explicitly considering the characteristics of this new task.\nCompared with baselines, our framework generates images without blurry, texture aliasing, and false hand-object interaction, as shown in Figure 3.\n\n4\\) For this new HOIG task, we also explore plentiful applications, *i.e.*, object texture editing, real-world hand-object generation and data augmentation for HOPE (in revision).\nThese applications further enrich the value of our method and proposed task.\n\n\n\n**Q3: Clarification on not using pixel-wise error as the evaluation metric.**\n\n**A3:** The MSE evaluation metric usually prefers the blurry result and is currently less utilized in generative problems.\nIn this work, we resort to the most widely-used metrics, *i.e.*, FID and LPIPS, in generative problems for the fidelity of the generated image.\nFID measures the distances between the real-image distribution and generated-image distribution.\nLPIPS is a weighted perceptual similarity between the generated image and the ground-truth image, which matches the human perception.\nThese two metrics are commonly adopted for evaluation in [1, 2, 3]. \n\n[1] Caroline Chan *et al.* \"Everybody Dance Now.\" ICCV 2019.\n\n[2] Tero Karras *et al.* \"Analyzing and Improving the Image Quality of StyleGAN.\" CVPR 2020. \n\n[3] Fabian Mentzer *et al.* \"High-Fidelity Generative Image Compression.\" NeurIPS 2020. \n\n**Q4: Clarification on U-Net.**\n\n**A4:** All U-Nets are trained from scratch.\n\n\n**Q5: Clarification on the fusion step.**\n\n**A5:** As mentioned in Line 152 - 159, the fusion step is implemented via learning the hand-object mask and the hand mask by two convolutional layers and fusing the three-stream results as Equation 7.", " The paper presents a framework to synthesize images with hand-object interactions using the 3D model of the object and the hand pose (also in 3D). The paper also claims to introduce a new task named hand-object interaction image generation. A set of experiments in two different datasets showed that the proposed framework outperformed constructed baselines in several metrics, including LPIPS, FID, AUC, PA-MPJPE, and ADD-01.D. - Strengths: \n - The paper is well-presented and easy to understand. No obvious typos.\n - The technical novelty looks incremental, but the experimental results and conclusions may be interesting for the community.\n - The proposed framework is well-validated, and the results showed clearly that it is superior to the baselines.\n\n- Weakness:\n - There are two major weaknesses. 
First, it seems that the proposed framework is too complex to solve the problem. Since it is provided a 3D model for the object and the hand indicating the pose of both, why not apply a photorealistic render algorithm and compose the image using the background of the input image?\n - The second weakness is the novelty of the technical solution feels incremental. Most of the building blocks of the proposed frameworks come from related work (e.g., UNets and SPADE). For instance, after computing a projection of the hand with the missing texture (occlusion-aware topology modeling component), the hand-object generator is basically applying a UNet to inpaint the hands and object texture and using SPADE to take into account the three-dimensional shape.\n - Another concern refers to the baselines and the evaluation metrics, which seem they are not challenging the proposed framework. For example, why not compare against a photorealist render algorithm and not use a pixel-wise error (e.g., MSE) using the ground-truth images to evaluate the quality of image generation?\n 1. Were all UNet trained from scratch or pre-trained using some dataset in an inpaint task?\n2. It is not clear the fusion step. Is it a learning layer? If so, is it an MLP?\n3. Is the hand-object generator training end-to-end? Are all the networks being training simultaneously, or was used a training-wise strategy?\n4. There are three independent branches in the network; therefore, it seems that each branch is unaware of the other, and the interaction between the hand and the object is not considered when training the model. How does the framework manage to explore the hand-object interaction to generate a more realistic image?\n5. At last, following my concern about the complexity and necessity of using a learning approach to solve the problem, why would not a good solution apply a rendering algorithm and after improving the photorealism of the synthetic image using a style transfer method? This kind of solution feels more practical and less complex than the proposed framework.\n Yes. The limitations were adequately addressed in the Limitations and Future Work section.", " The authors have investigated the task of hand-object interaction generation (HOIG) which is inverse problem of hand-object pose estimation (HOPE) problem. \n\nTarget image for hand, object, and the interaction between the two is created using a split-and-combine manner. \n\nHOGAN architecture based on conditional GANs is proposed for creating a unified space for hand and object. In their task, they have synthesized hand and object interaction conditioned on a target pose while preserving the appearance of the source image. This framework consists of background, object, and hand streams. Eventually, using a fusion model, all these streams are merged. \n\nThe hand-object interaction generation framework is finally tested on two large-scale datasets, HO3Dv3 and DexYCB that have annotated hand-object mesh representations. \n\nThe generated interaction hands and objects are finally evaluated both quantitatively and qualitatively under various methods for fidelity (e.g. FID), structure preservation (e.g. PA-MPJPE), as well as human subject evaluations.\n This research work is of great importance because its direct use in AR/VR applications such as augmented shopping or mixed-reality gaming experiences. \n\nTheir framework processes complex self- and mutual occlusion for both hand and object. 
For example, because hand is articulated, there is self-occlusion between the hand joints. \nBased on their claims, these authors are the first to study the HOIG task which counts as novelty. Their other aspect of novelty is propose of HOGAN architecture for conditional generation of interacting hands and objects.\n\nI think it’s very impressive that the correct texture is rendered for the object in different pose despite the inherent occlusion. \n\nThe comprehensive benchmarking and evaluation methodology followed by ablation studies by author is a very strong point in this paper. Further, given that the authors are the first to define this new task and there’s a lack of baselines, they did a great job of modifying the closest baselines to their needs and use them for comparison. \n\nNot really weakness, but adding a paragraph or two about importance of synthetic dataset (not just HOI dataset) creation and its application in deep learning would be to the benefit of this paper. \n\nWeakness: Provide mechanism for hyperparamater tuning of your framework hyperparamaters. Question 3.b answer states the author have told how they have chosen the hyperparameters but it is not provided.\n\nI think there might have been a need for IRB due to use of human subjects for qualitative analysis of produced HOI. However, in Question 5.b, the answer is N/A. I would say the qualitative study should be outsourced to others not the authors because if it is done by the authors, there is a lot of bias involved. Please provide some clearance on this. \n\nIt is not clear to me how the participants for the qualitative analysis are hired. Are they hired in-person or through an online crowdsourcing platform such as Amazon Turk? In either case, please provide details of subject recruiting. \n\n 1- How can your model apply to non-rigid object interaction?\n\n2- How would you expand your model to hand-object interaction without an initial source?\n\n3- How does your model perform in case of non-symmetric objects vs symmetric objects?\n\n4- Does your model perform greatly independently of the object size? For example, does it do a great job when interacting with a soccer ball rather than holding a toothbrush?\n\n5- In figure 2, Inpainted Background, why didn’t you remove the entire hand? Wouldn’t the remaining of hand cause you problem further down the line?\n\n6- Line 180: please state how you choose your hyperpatameters such as LR, number of epochs, etc. Did you perform any hyperparameter tuning (e.g. Grid Search or Random Search)? Please provide the mechanism for so as well as potential results of your hyperparameter tuning. \n\n7- Have you tested your model with darker color hands? How does your model perform in such cases? Is your framework agnostic to hand texture, size, and color? For example, if someone is wearing a ring on her finger, how does your model handle the reconstruction of the ring in the target pose?\n\n\nSuggestions:\n\n1- VR / AR → AR/VR\n\nLine 68: face reenactment and sign language production → face reenactment, and sign language production\n\nLine 171: please state which VGG, e.g. VGG-16 or VGG-19 in the writing\n\nLine 175: implementation details and evaluation metrics → implementation details, and evaluation metrics\n\n It is stated that the link code is provided but I cannot find it in the paper. Please provide the link to the source-code. It is also not clear to me if the code to modified baselines as well as HOGAN framework would be open-sourced in case the paper gets published. 
\n\nI do see the code in the supplementary materials however it is just not clear to me if the code will be released in an open-source fashion. Also, please let us if you would share the trained model checkpoints for other researchers publicly. \n\n\n", " This paper presents a new task of synthesizing photorealistic images of hand-object interaction. In addition, this work introduces a strong baseline method by conditioning an image-to-image translation network with hand/object topology as well as warped texture from the source image. The experiments show that the proposed approach outperforms the naive extention of existing approaches. Also the paper provides several applications such as texture editing and real hand texture transfer. The paper has the following strengths:\n- The paper introduces a novel task called hand-object image synthesis. As discussed in the paper, the existing literature mostly focus on scene analysis, not synthesis. \n- The proposed HOGAN presents a strong baseline for this newly introduced task. The experimental results also show that existing approaches are not suitable for this novel task. \n- The paper is well written and easy to follow.\n\nOn the other hand, this work has the following weaknesses:\n- While the paper constantly argues that HOIG is of broad interest to the community, the listed applications and demonstration in the paper are not convincing enough. In my view, there is no immediate applications in AR/VR or online shopping with texture editing or real-hand texture transfer only inside the small cropping of hand and object images. More immediate applications of this problem would be to use synthesized data to bootstrap the training of hand-object interaction analysis (e.g., pose estimation, object localization). I would highly recommend adding experiments to show that the presented task is useful for downstream analysis tasks similarly to [Shrivastava et al. 2017]. \n\nLearning from Simulated and Unsupervised Images through Adversarial Training\nAuthorsAshish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, Russ Webb\nCVPR 2017\n\n- Another issue of this work is the expected input seems unrealistic. More specifically, it remains challenging to obtain accurate hand and object poses in a single image. Noise perturbation in input poses would be great to have to assess the robustness of each approach in the presence of pose misalignment.\n\nWhile the paper presents a novel task and its strong baseline method, its significant remains unclear with the current state. As I mentioned earlier, providing more convincing use cases would make the paper stronger. Thus, I would recommend resubmitting. Related to the weaknesses above, if AR/VR, and online shopping are the primary applications of HOIG, why not proposing the task of “human”-object interaction image synthesis instead of hand-object? I can see the immediate applications in those domains if it were full-body, but not clear with only hands unless it’s used for bootstrapping pose estimation tasks. The paper discusses its limitations and societal impact. ", " In this paper, the authors propose a new task, i.e., hand-object interaction image generation. The proposed method uses occlusion-aware topology modeling, which captures the occlusion relationship between objects and hands, and warps the unoccluded region. The warped hand input and object input, together with their topologies, are fed into the hand-object generator. 
The hand-object generator deals with the hand and object generation separately.\n\nThe contributions lie in that 1) the paper proposes a new task; 2) the proposed method generates plausible results for this new task. Strengths:\n1) The proposed method is intuitive and achieves plausible results on hand-object interaction generation.\n2) The paper is well-written and easy to follow.\n\n\nWeaknesses:\n1) The proposed method takes too much information containing rich priors as inputs, which greatly eases the tasks. Specifically, with the source posture, it is easy to model the occlusion between hands and objects. With the source posture and target posture, the only thing the network needs to learn is to inpaint the warped images. As for the object generation, with the object model, the network only needs to adjust the lighting of the object. Admittedly, the task itself is challenging as suggested by the author. However, with these complicated inputs, the task is easy to solve. In my understanding, the authors tackle the problem by feeding more inputs rather than solving the problem in the technical aspect.\n2) The comparison with GestureGAN+OBJ is not fair. The input to the GestureGAN is only the source image and target pose represented by sparse keypoints. The input contains much less prior information compared to the proposed method. 1) Apart from proposing a new task, the author should further clarify their technical contribution.\n2) With the inputs containing rich prior information, the author should address the key technical challenges. In my understanding, with these inputs, the challenges mentioned in the paper are addressed. I wonder if there are any other technical challenges after using these complicated inputs. Please refer to my concerns about this issue in weakness (1). The authors have addressed the limitations. For example, the posture representation is the dense mesh representation, which is not easy to get in real-world applications." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 10, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "DZoHGNHgaCK", "8icyokoPXhk", "OSglgaIxkNW", "nips_2022_DDEwoD608_l", "DZoHGNHgaCK", "8icyokoPXhk", "OSglgaIxkNW", "DZoHGNHgaCK", "8icyokoPXhk", "HsUIE-tfiZJ", "ozBistzqtda", "bwCSu03lCI_", "OSglgaIxkNW", "nips_2022_DDEwoD608_l", "nips_2022_DDEwoD608_l", "nips_2022_DDEwoD608_l", "nips_2022_DDEwoD608_l" ]
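The fusion step described in the authors' response A5 above (two convolutional layers predict a hand-object mask and a hand mask, and the three streams are blended as in the paper's Equation 7) can be illustrated with a minimal sketch. Equation 7 itself is not quoted in this record, so the composition order below is only one plausible reading, and every identifier (`hand_rgb`, `obj_rgb`, `bg_rgb`, `mask_h`, `mask_ho`) is a hypothetical placeholder rather than a name from the HOGAN code.

```python
import numpy as np

def fuse_streams(hand_rgb, obj_rgb, bg_rgb, mask_h, mask_ho):
    """Blend three image streams with two soft masks (values in [0, 1]).

    hand_rgb, obj_rgb, bg_rgb: float arrays of shape (H, W, 3).
    mask_h:  soft hand mask, shape (H, W, 1); 1 -> take the hand stream.
    mask_ho: soft hand-object mask, shape (H, W, 1); 1 -> take the foreground.
    """
    # Hand vs. object first, then foreground vs. inpainted background (assumed order).
    foreground = mask_h * hand_rgb + (1.0 - mask_h) * obj_rgb
    return mask_ho * foreground + (1.0 - mask_ho) * bg_rgb

# Toy usage with random 4x4 "images".
rng = np.random.default_rng(0)
h, w = 4, 4
hand = rng.random((h, w, 3))
obj = rng.random((h, w, 3))
bg = rng.random((h, w, 3))
m_h = rng.random((h, w, 1))
m_ho = rng.random((h, w, 1))
out = fuse_streams(hand, obj, bg, m_h, m_ho)
print(out.shape)  # (4, 4, 3)
```

In the actual framework the two masks would be predicted by learned convolutional layers rather than sampled at random; the sketch only shows how two masks suffice to combine hand, object, and background streams into one image.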
nips_2022_8wtaJ9dE9Y2
Predicting Label Distribution from Multi-label Ranking
Label distribution provides richer information about label polysemy than logical labels in multi-label learning. There are currently two strategies, LDL (label distribution learning) and LE (label enhancement), for predicting label distributions. LDL requires experts to annotate instances with label distributions and learns a predictive mapping on such a training set. LE requires experts to annotate instances with logical labels and generates label distributions from them. However, LDL requires costly annotation, and the performance of LE is unstable. In this paper, we study the problem of predicting label distribution from multi-label ranking, which is a compromise w.r.t. annotation cost but has good guarantees for performance. On the one hand, we theoretically investigate the relation between multi-label ranking and label distribution. We define the notion of EAE (expected approximation error) to quantify the quality of an annotation, give the bounds of EAE for multi-label ranking, and derive the optimal range of label distributions corresponding to a particular multi-label ranking. On the other hand, we propose a framework for predicting label distributions from multi-label ranking via conditional Dirichlet mixtures. This framework integrates the processes of recovering and learning label distributions end-to-end and allows us to easily encode our knowledge about the current task through a scoring function. Finally, we conduct extensive experiments to validate our proposal.
Accept
This paper studies the problem of predicting label distribution from multi-label ranking. First, the authors give a theoretical analysis proving the superiority of multi-label ranking over logical labels. Then an end-to-end framework called DRAM is proposed for recovering and learning label distributions. The corresponding experiments validate the effectiveness of the proposed algorithms. Overall, this work is technically solid. The concerns raised by the reviewers are not serious and have been addressed. I recommend accepting this paper.
train
[ "_3wxVtiHYYdB", "CuGyNf5IJ7", "fhem4Jv8fO", "_d0otkqdRn", "SG9wdAsr77q", "IaIn2W2yIVI", "SNu_osq8WJ", "9-_wBdkaN6d" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your suggestions. Below we give the point-by-point responses to your questions.\n\n**Q1: It seems that the theoretical analysis has little relatedness with the proposed framework and there are not experiments to support the theoretical results. It is something that looks beautiful.**\n\nA: The theoretical part of our paper serves two purposes: (1) It helps us understand the advantages of multi-label ranking w.r.t. approximating the true label distribution. (2) The optimal range of $\\\\hat\\\\delta$ can help us construct the high-quality prior distribution of the label distribution, i.e., Eq. (7) in our paper.\n\n**Q2: The trade-off parameter $\\\\lambda$ is selected from a very wide range by five-fold cross-validation. Is there a good default choice? Parameter sensitivity should be made. Besides, five-fold cross-validation is conducted over the whole data set or only the 70% data for training?**\n\nA: (1) The parameter sensitivity is shown in the following table, where each entry denotes the performances on Cheb/Rho. The last column NSRD is the average performance on NSRD-e1, NSRD-e2 and NSRD-e3 datasets. It can be seen that $\\\\lambda=1$ is a good default choice for most datasets. (2) Five-fold cross-validation is conducted over the 70% data for training. Specifically, we first select the optimal hyperparameter on the training set (70% dataset) using five-fold cross-validation, then train the model on the training set using that hyperparameter, and finally report the performance on the test set (30% dataset).\n\n|$\\\\lambda$|Emotion6| Flickr-LDL|Twitter-LDL|Movie| NSRD|\n|-|-|-|-|-|-|\n|1.0E-05|0.304/0.486|0.352/0.488|0.372/0.454|0.124/0.720|0.539/0.154|\n|5.0E-05|0.304/0.484|0.353/0.488|0.355/0.481|0.124/0.719|0.543/0.150|\n|1.0E-04|0.304/0.486|0.353/0.487|0.347/0.493|0.124/0.719|0.549/0.152|\n|5.0E-04|0.304/0.484|0.352/0.488|0.322/0.524|0.125/0.717|0.553/0.150|\n|1.0E-03|0.304/0.484|0.352/0.489|0.313/0.538|0.126/0.716|0.555/0.084|\n|5.0E-03|0.301/0.494|0.351/0.492|0.303/0.556|0.129/0.712|0.549/0.202|\n|1.0E-02|0.298/0.502|0.346/0.501|0.303/0.563|0.130/0.707|0.556/0.295|\n|5.0E-02|0.284/0.542|0.323/0.546|0.307/0.575|0.141/0.651|0.580/0.259|\n|1.0E-01|0.281/0.564|0.312/0.588|0.310/0.579|0.147/0.649|0.570/0.311|\n|5.0E-01|0.282/0.588|0.318/0.620|0.319/0.589|0.164/0.648|0.554/0.418|\n|1.0E+00|0.283/0.581|0.324/0.627|0.326/0.591|0.171/0.648|0.553/0.426|\n|5.0E+00|0.299/0.571|0.352/0.620|0.343/0.599|0.179/0.648|0.550/0.438|\n|1.0E+01|0.309/0.555|0.364/0.615|0.355/0.604|0.181/0.648|0.546/0.439|\n|5.0E+01|0.328/0.527|0.396/0.603 | 0.366/0.601|0.182/0.648|0.524/0.453|\n\n**Q3: Why the general LDL data sets are not used in experiments. It is very easy to construct data sets with label rankings and with binary labels from LDL data set. The 15 LDL data sets release by Geng**\n\nA: Our criterion for selecting the datasets is that the label distributions of the datasets are generated by expert annotation, which is in line with the motivation of our paper. In terms of the 15 datasets published by Geng, the label distributions of Natural-Scene dataset are generated by the algorithm proposed in the paper [1]; the label distributions of Yeast and Human-Gene datasets are obtained from the biological laboratory equipments; s-JAFFE and SBU-3DFE datasets have label distributions with all description degree values greater than zero, which will produce rankings over the whole label set (i.e., label ranking) instead of multi-label rankings. 
Therefore, we only chosen the Movie dataset from Geng's 15 datasets. \n\n[1] Geng, X. and L. Luo. Multilabel Ranking with Inconsistent Rankers. CVPR (2014): 3742-3747.\n\n**Q4: Authors points out two limitations in Section 6. For the 2nd one, i.e., \"If several labels actually describe the instance to the same degree, then requiring experts to give the strict order of these labels may lead to errors and invalidate Theorem 1\", authors should extend the proposed framework to deal with such kind of data sets, not treat it as a future work. After all, it is more realistic that some labels have the same description degree.**\n\nA: We did not consider the situation of \"identical label discription degree\" in this paper because the analytic form of EAE and its corollaries are too complex to be shown together with the rest of the paper in a limited pages. What is more, this situation may only invalidate Theorem 1 and its corollaries. For our proposed DRAM framework, as stated in the conclusion of our paper, only minor modifications are needed to accommodate this situation. For example, given a label ranking $y_1<y_2=y_3$, where $y_i$ denotes the $i$-th label, when generating $\\\\boldsymbol z$ (i.e., line 5 of Algorithm 1), we only need to generate a random real-valued vector for $y_1<y_2$, say $[0.3, 0.5]$, and set the corresponding value of $y_3$ to the value of $y_2$, and end up with a vector $[0.3,0.5,0.5] $, and finally normalize this vector to obtain a label distribution.", " Many thanks for your comments, we have provided point-by-point responses to your questions below.\n\n**Q1: What is the time cost of the EM algorithm used to train model?** \n\nA: The complexity of the training procedure for DRAM is dominated by the E-step and M-step, and depends on the basic learner used. Each EM iteration in DRAM takes $O((LM+TP)KN)$, where $K$, is the number of Dirichlet components in the mixture, $N$ is the number of observations, $P$ is the number of learnable parameters of the used basic learner (for example, for DRAM+LN, $P$ is equal to the number of feature variables multiplied by the number of labels), $M$ is the number of labels, $L$ is the number of Monte Carlo samples, $T$ is the number of iterations required for the M-step to reach convergence.\n\n**Q2: what is the capability of the proposed framework on large datasets?**\n\nA: Most label ranking methods have high complexity due to the pairwise or listwise operations involved. In our paper, we avoid these operations (and thus avoiding the high time complexity) by converting label ranking into a random vector, and thus our framework also works on large-scale datasets.\n\n**Q3: Are there any selection criteria for scoring function? Note that only DRAM+LN is tested on all datasets and how do different scoring functions affect the results?**\n\nA: Since our paper is not concerned with designing a good scoring function, we follow the principle of \"the simpler the better\" and choose the non-informative scoring function. Of course, most existing LE assumptions can be used as a scoring function with simple modifications. For example, a common assumption of LE is that semantically similar labels are closer w.r.t. label description degree (label correlation assumption). We can define $\\\\phi(d)={(\\\\sum_{i,j}|(d_i-d_j)^2-s_{ij}|)^{-1}}$ as the scoring function to encode this assumption, where $s_{ij}$ is the normalized distance between the i-th and j-th column of permutation matrix. 
The experimental results of using this scoring function are as follows (all hyperparameters keep unchanged). Each entry in the table represents \"DRAM+LC/DRAM+LN\", where DRAM+LC is DRAM with the new scoring function.\n\n| Dataset| Cheb|Canber|Cosine|Rho|\n|:--|-|-|-|-|\n|Movie|0.123/0.124|1.055/1.058|0.934/0.932|0.722/0.720|\n|Emotion6|0.281/0.282|3.956/3.953|0.783/0.785 | 0.592/0.588 |\n|Twitter-LDL|0.355/0.355|6.525/6.526|0.828/0.828 | 0.601/0.604 |\n|Flickr-LDL|0.324/0.324|6.015/6.013|0.814/0.815 | 0.625/0.627 |\n|NSRD-e1|0.517/0.509|7.655/7.649|0.591/0.599 | 0.453/0.459 |\n|NSRD-e2|0.517/0.509|7.655/7.649|0.592/0.599 | 0.453/0.459 |\n|NSRD-e3|0.559/0.554|7.699/7.699|0.574/0.577 | 0.451/0.455 |\n\nIt can be seen that the new scoring functions have a small effect on results. For time reasons, the new scoring functions are only the simplest implementation of label correlation assumption; we believe that a well-designed scoring function can effectively improve the performance.\n\n**Q4: Multi-label ranking may still require a high cost. The authors can add some discussion on cost of various methods (the proposal, LE and LDL methods), for example, the cost comparison when achieving the same performance.**\n\nA: (1) From the empirical perspective, logical label annotating requires giving whether each label is relevant to the instance; multi-label ranking requires additionally giving the ranking of relevant labels; label distribution requires further answering the question of how preferable is one label to another label. Obviously, w.r.t. annotating cost, logical label is the smallest, multi-label ranking is moderate, and label distribution is the largest. As for the quantitative difference in these costs, we cannot give precise results because we did not record the exact time of annotating. Nevertheless, we can give a rough conclusion according to our experience in annotating the NSRD dataset. During creating the NSRD, the ratio of time we spent on annotating with logical label, label ranking, and label distribution was about 1:2:4.\n\n(2) From the theoretical perspective, label distribution annotating produces an EAE (expected approximation error) of zero. According to the EAE formulas for multi-label ranking and logical label (as shown in Eq. (2) and Eq. (5) in our paper), multi-label ranking and logical label can theoretically never achieve the same performance as label distribution annotation. Nevertheless, by observing the EAE at different values of m, we can get an idea of how many labels need to be annotated to reach a certain error value:\n\n|m|1|2|3|4|5| 6|7|8|9|10|\n|-|-|-|-|-|-|-|-|-|-|-|\n|$E_\\\\sigma$ |0.08| 0.07|0.067|0.065|0.063|0.062|0.061|0.06|0.06|0.06|\n|$E_l$| 0.097|0.215|0.366|0.524|0.686|0.85|1.014|1.179|1.344|1.51|\n\nwhere $E_{\\\\sigma}$ is the expected EAE defined in Corollary 3 of our paper, and ${E}_{ l}$ is the expected value of EAE for logical label when $\\\\delta\\\\sim\\\\text{Uni}(0,m^{-1})$ and $\\\\hat\\\\delta\\\\sim \\\\text{Uni}(0, m^{-1})$.", " Many thanks for your precious comments and corrections. 
we have provided point-by-point responses to your questions below.\n\n**Q1: In Definition 1, the author states that $z$ and $\\\\hat z$ are independent, but from Corollary 1 it can be seen that $\\\\delta$ and $\\\\hat \\\\delta$ are closely related, so the independence of $z$ and $\\\\hat z$ needs further explanation.**\n\nA: If we know the value of $\\\\delta$, then Corollary 1 can tell us what $p(\\\\hat z)$ minimizes the EAE, which means that $\\\\delta$ can affect $p(\\\\hat z)$. But, the independence of $z$ and $\\\\hat z$ is determined by $p(z,\\\\hat z)=p(z)p(\\\\hat z)$, and $\\\\delta$ clearly does not affect this equality. What is more, the value of $\\\\delta$ is actually unknown. As shown in line 133 of our paper, the specific value of $\\\\hat \\\\delta$ is taken from $[m(m+1)^{-2},m^{-1}]$, which is not directly related to $\\\\delta$.\n\n**Q2: It can be seen from Corollary 1 that the optimal solution of $\\\\hat\\\\delta^\\\\star=((2m+1)\\\\delta+m)(m+1)^{-2}$ does not approach $\\\\delta$, whether this shows the capability boundary of the method?** \n\nA: Yes, the bound of EAE is shown in Corollary 2 of our paper.\n\n**Q3: In addition, the process of theoretically guiding the algorithm is mainly reflected in the update of $\\\\hat \\\\delta_n=|\\\\sigma_n|(|\\\\sigma_n|+1)^{-2}$. From the Corollary 3, $\\\\hat\\\\delta\\\\in[m(m+1)^{-2}, m^{-1}]$, why does $\\\\delta$ take the lower bound of the interval instead of a certain value in the middle?**\n\nA: The parameter $\\\\hat\\\\delta$ can determine the range of label distribution (i.e., $\\\\mathcal S_{\\\\boldsymbol\\\\sigma}^{\\\\hat\\\\delta}$), and the volume of $\\\\mathcal S_{\\\\boldsymbol\\\\sigma}^{\\\\hat\\\\delta}$ equals ${(1-m\\\\hat\\\\delta)^m}{(m!)^{-1}}$ as shown in the line 14 of the Appendix. This means that the volume of the label distribution's space is smaller when $\\\\hat \\\\delta$ is larger. If the range of label distribution is too small, the label distribution will tend to be a regular step shape (e.g., $[0, 0.1,0.2,0.3,0.4]$), which will result in the scoring function being useless, and thus unable to portray the relative importance information among the labels. Therefore, we take the lower bound of $\\\\hat\\\\delta$.\n\n**Q4: When analyzing how the number of mixture components K affects the performance of the method, the authors point out that “it can be seen that the performance gets better first and then worse as K increases.”, it seems that such a conclusion cannot be simply drawn from the figure.**\n\nA: Many thanks for your correction. This conclusion is indeed somewhat arbitrary, and we consider revising it to \"It can be seen that appropriately increasing the Dirichlet components in the mixture can improve the model capacity and thus improve the predictive performance, but too many Dirichlet components may lead to overfitting and thus degrade the predictive performance.\".", " Thank you for your valuable comments, we have provided point-by-point responses to your questions below.\n\n**Q1: On the other hand, for me, the main weakness mainly lies in that the paper is not easy to follow. 
I understand that the authors have put necessary proofs and details to the supplementary material, but still the Sections 2 and 3 are not easily understandable.** \n\nA: We will make some necessary changes to make Sections 2 and 3 easier to understand.\n\n**Q2: Besides, I expect to see the results in terms of KL divergence because it is such an important measure metric for LDL.**\n\nA: The performance of the comparison algorithms on KL divergence is shown in the following table, where the notation meaning is consistent with Table 1 in our paper.\n\n| Method | Movie | Emotion6 | Twitter-LDL | Flickr-LDL |\n| :------- | :---------------------------- | :---------------------------- | :---------------------------- | :---------------------------- |\n| DRAM+LN | $(2)\\\\ 0.104\\\\pm0.002$ | $(1)\\\\ 0.488\\\\pm0.014$ | $(1)\\\\ 0.696\\\\pm0.025$ | $(1)\\\\ 0.623\\\\pm0.009$ |\n| DT+VI+SA | $(6)\\\\ 0.163\\\\pm0.002\\\\ \\\\bullet$ | $(3)\\\\ 0.586\\\\pm0.029\\\\ \\\\bullet$ | $(6)\\\\ 1.162\\\\pm0.009\\\\ \\\\bullet$ | $(6)\\\\ 0.986\\\\pm0.013\\\\ \\\\bullet$ |\n| DT+VI+DM | $(7)\\\\ 0.163\\\\pm0.002\\\\ \\\\bullet$ | $(2)\\\\ 0.571\\\\pm0.021\\\\ \\\\bullet$ | $(5)\\\\ 1.156\\\\pm0.014\\\\ \\\\bullet$ | $(5)\\\\ 0.982\\\\pm0.014\\\\ \\\\bullet$ |\n| DT+GL+SA | $(4)\\\\ 0.133\\\\pm0.001\\\\ \\\\bullet$ | $(5)\\\\ 0.696\\\\pm0.027\\\\ \\\\bullet$ | $(3)\\\\ 1.059\\\\pm0.004\\\\ \\\\bullet$ | $(3)\\\\ 0.933\\\\pm0.009\\\\ \\\\bullet$ |\n| DT+GL+DM | $(3)\\\\ 0.130\\\\pm0.001\\\\ \\\\bullet$ | $(4)\\\\ 0.653\\\\pm0.019\\\\ \\\\bullet$ | $(4)\\\\ 1.105\\\\pm0.005\\\\ \\\\bullet$ | $(4)\\\\ 0.960\\\\pm0.009\\\\ \\\\bullet$ |\n| GT+SA | $(5)\\\\ 0.161\\\\pm0.007\\\\ \\\\bullet$ | $(7)\\\\ 3.229\\\\pm0.218\\\\ \\\\bullet$ | $(7)\\\\ 1.983\\\\pm0.221\\\\ \\\\bullet$ | $(7)\\\\ 1.665\\\\pm0.100\\\\ \\\\bullet$ |\n| GT+DM | $(1)\\\\ 0.098\\\\pm0.002\\\\ \\\\circ$ | $(6)\\\\ 1.034\\\\pm0.049\\\\ \\\\bullet$ | $(2)\\\\ 0.912\\\\pm0.024\\\\ \\\\bullet$ | $(2)\\\\ 0.925\\\\pm0.014\\\\ \\\\bullet$ |\n\n| Method | NSRD-e1 | NSRD-e2 | NSRD-e3 |\n| :------- | :---------------------------- | :---------------------------- | :---------------------------- |\n| DRAM+LN | $(3)\\\\ 1.222\\\\pm0.018$ | $(3)\\\\ 1.219\\\\pm0.021$ | $(3)\\\\ 1.279\\\\pm0.035$ |\n| DT+VI+SA | $(7)\\\\ 1.582\\\\pm0.042\\\\ \\\\bullet$ | $(7)\\\\ 1.557\\\\pm0.051\\\\ \\\\bullet$ | $(7)\\\\ 1.599\\\\pm0.058\\\\ \\\\bullet$ |\n| DT+VI+DM | $(6)\\\\ 1.510\\\\pm0.037\\\\ \\\\bullet$ | $(6)\\\\ 1.488\\\\pm0.031\\\\ \\\\bullet$ | $(6)\\\\ 1.562\\\\pm0.038\\\\ \\\\bullet$ |\n| DT+GL+SA | $(5)\\\\ 1.436\\\\pm0.014\\\\ \\\\bullet$ | $(5)\\\\ 1.425\\\\pm0.012\\\\ \\\\bullet$ | $(5)\\\\ 1.486\\\\pm0.012\\\\ \\\\bullet$ |\n| DT+GL+DM | $(4)\\\\ 1.427\\\\pm0.016\\\\ \\\\bullet$ | $(4)\\\\ 1.419\\\\pm0.012\\\\ \\\\bullet$ | $(4)\\\\ 1.480\\\\pm0.014\\\\ \\\\bullet$ |\n| GT+SA | $(1)\\\\ 1.179\\\\pm0.037\\\\ \\\\circ$ | $(1)\\\\ 1.149\\\\pm0.044\\\\ \\\\circ$ | $(2)\\\\ 1.200\\\\pm0.053\\\\ \\\\circ$ |\n| GT+DM | $(2)\\\\ 1.190\\\\pm0.044\\\\ \\\\circ$ | $(2)\\\\ 1.187\\\\pm0.039\\\\ \\\\circ$ | $(1)\\\\ 1.187\\\\pm0.047\\\\ \\\\circ$ |\n\nBy observing this table above and Table 1 in our paper, it can be found that the experimental results on KL divergence are very similar to those on the Cheb metric. Therefore, in consideration of page limitation, we do not show the KL divergence.", " This paper studies Label Enhancement (LE). 
Instead of enhancing label distribution from logical labels, it proposes to recovery label distribution from multi-label ranking annotations. To achieve that, the authors establish several theories and put forward a new method. Experimental results validate the advantages of recovering label distribution from ranking over logical labels. On one hand, the strengths include:\n+ LE with ranking annotation is original, which hasn't been noticed in the field of LE. Besides, the authors theoretically prove that LE from ranking is better than LE from logical labels, as shown in corollary 4.\n+ The method DRAM is novel. DRAM uses the Dirichlet mixtures and EM algorithm to recover label distribution from rankings. \n+ Most importantly, the experiments show that DRAM has remarkable improvements over existing LE (from logical labels). Moreover, the DRAM+LN combination even outperforms SA and DM with the ground-truth label distributions, which is impressive. \n\nOn the other hand, for me, the main weakness mainly lies in that the paper is not easy to follow. I understand that the authors have put necessary proofs and details to the supplementary material, but still the Sections 2 and 3 are not easily understandable. Besides, I expect to see the results in terms of KL divergence because it is such an important measure metric for LDL. I expect to see the results in terms of KL divergence in the rebuttal. I believe the authors have clarified the limitations of their paper. ", " This article propose a generic framework named DRAM(label Distribution predicting from\nmulti-label RAnking via conditional Dirichlet Mixtures). It allows to flexibly encode the prior knowledge about the tasks by a scoring function, and it integrates the processes of recovering and learning label distributions end-to-end. Besides, the author theoretically investigate the relation between multi-label ranking and label distribution and define the notion of EAE to quantify the quality of an annotation, and give the bounds of EAE for multi-label ranking.\n Generally speaking, the article has a certain degree of innovation and the theoretical results are plentiful. The structure of the article is complete and the thinking is clear. Therefore, we hold the opinion of weak reception.\n1. In Definition 1, the author states that $z$ and $\\hat z$ are independent, but from Corollary 1 it can be seen that ${\\delta }$ and ${\\hat \\delta }$ are closely related, so the independence of $z$ and $\\hat z$ needs further explanation.\n\n 2. It can be seen from Corollary 1 that the optimal solution of ${\\hat \\delta ^*} = ((2m + 1)\\delta + m){(m + 1)^{ - 2}}$ does not approach ${\\delta }$, whether this shows the capability boundary of the method? In addition, the process of theoretically guiding the algorithm is mainly reflected in the update of ${{\\hat \\delta } _n} = \\left| {{\\sigma _n}} \\right|{(\\left| {{\\sigma _n}} \\right| + 1)^{ - 2}}$. From the Corollary 3, ${\\hat \\delta } \\in [m{(m + 1)^{ - 2}},{m^{ - 1}}]$, why does ${\\delta }$ take the lower bound of the interval instead of a certain value in the middle ?\n 3. When analyzing how the number of mixture components K affects the performance of the method, the authors point out that “t can be seen that the performance gets better first and then worse as K increases.”, it seems that such a conclusion cannot be simply drawn from the figure.", " This paper studies how to exploit multi-label ranking, a compromise w.r.t. 
annotation cost, to learn a predictive model on label distribution. The authors theoretically investigate the relation between multi-label ranking and label distribution and demonstrate its superior over the logical labels. Then a general framework involving custom knowledge is proposed to recover and learn label distribution end-to-end. The extensive experiments show the effectiveness of the proposal. This is the first paper that proposes to predict label distribution from the view of multi-label ranking and shows the good guarantees for performance. The originality is good and the main idea of this paper is clearly presented. They also show better results than other compared methods. Despite that, I still have some concerns:\n1. What is the time cost of the EM algorithm used to train model? The authors should add more analysis on time complexity.\n2. Note that the datasets used in this paper are relatively small, so what is the capability of the proposed framework on large datasets? In fact, label ranking is often associated with high time complexity.\n3. Are there any selection criteria for scoring function? Note that only DRAM+LN is tested on all datasets and how do different scoring functions affect the results? \n4. Multi-label ranking may still require a high cost. The authors can add some discussion on cost of various methods (the proposal, LE and LDL methods), for example, the cost comparison when achieving the same performance.\n5. Some spelling mistakes need to be corrected, e.g., “internel” in the third sentence in section 2.1 should be “internal”.\n\n 1. What is the time cost of the EM algorithm used to train model? The authors should add more analysis on time complexity.\n2. Note that the datasets used in this paper are relatively small, so what is the capability of the proposed framework on large datasets?\n3. Are there any selection criteria for scoring function? Note that only DRAM+LN is tested on all datasets and how do different scoring functions affect the results? \n4. Multi-label ranking may still require a high cost. The authors can add some discussion on cost of various methods (the proposal, LE and LDL methods), for example, the cost comparison when achieving the same performance.\n Yes", " This paper aims at predicting label distribution from multi-label ranking. It is different from the existing label distribution learning (LDL) and label enhancement (LE), where LDL learns a predictive mapping on training set with label distributions annotations and LE generates label distributions from instances with logical labels. This work is a compromise w.r.t. annotation cost but has good guarantees for performance. Strengths:\n1. This paper has a good motivation.\n2. This paper proposed a framework of label distribution predicting from multi-label ranking.\n3. Some experimental results validate the superiority of the proposed method.\n\nWeaknesses:\n1. It seems that the theoretical analysis has little relatedness with the proposed framework and there are not experiments to support the theoretical results. It is something that looks beautiful. \n2. The trade-off parameter $\\lambda$ is selected from a very wide range by five-fold cross-validation. Is there a good default choice? Parameter sensitivity should be made. Besides, five-fold cross-validation is conducted over the whole data set or only the 70% data for training?\n I have listed some weaknesses above. Here, my further concern is why the general LDL data sets are not used in experiments. 
It is very easy to construct data sets with label rankings and with binary labels from LDL data set. The 15 LDL data sets release by Geng can be found at: http://palm.seu.edu.cn/xgeng/LDL/index.htm Authors points out two limitations in Section 6. For the 2nd one, i.e., \"If several labels actually describe the instance to the same degree, then requiring experts to give the strict order of these labels may lead to errors and invalidate Theorem 1\", authors should extend the proposed framework to deal with such kind of data sets, not treat it as a future work. After all, it is more realistic that some labels have the same description degree." ]
[ -1, -1, -1, -1, 5, 6, 5, 7 ]
[ -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "9-_wBdkaN6d", "SNu_osq8WJ", "IaIn2W2yIVI", "SG9wdAsr77q", "nips_2022_8wtaJ9dE9Y2", "nips_2022_8wtaJ9dE9Y2", "nips_2022_8wtaJ9dE9Y2", "nips_2022_8wtaJ9dE9Y2" ]
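The tables in the DRAM rebuttal above report Cheb, Canber, Cosine, KL, and Rho, the standard label-distribution-learning evaluation measures. The NumPy sketch below implements the first four from their usual definitions (Spearman's Rho, a rank correlation, is omitted for brevity). This is an illustrative re-implementation, not the authors' evaluation code, and the epsilon guard is an added assumption for numerical stability.

```python
import numpy as np

EPS = 1e-12  # guards against division by zero and log(0); an added assumption

def chebyshev(d, d_hat):
    """Chebyshev distance: maximum absolute difference over labels (lower is better)."""
    return float(np.max(np.abs(d - d_hat)))

def canberra(d, d_hat):
    """Canberra distance: sum of normalized absolute differences (lower is better)."""
    return float(np.sum(np.abs(d - d_hat) / (np.abs(d) + np.abs(d_hat) + EPS)))

def cosine(d, d_hat):
    """Cosine similarity between the two distributions (higher is better)."""
    return float(np.dot(d, d_hat) / (np.linalg.norm(d) * np.linalg.norm(d_hat) + EPS))

def kl_divergence(d, d_hat):
    """KL divergence KL(d || d_hat) (lower is better)."""
    return float(np.sum(d * np.log((d + EPS) / (d_hat + EPS))))

# Toy usage on two 4-label distributions.
d_true = np.array([0.10, 0.20, 0.30, 0.40])
d_pred = np.array([0.15, 0.25, 0.25, 0.35])
print(chebyshev(d_true, d_pred), canberra(d_true, d_pred))
print(cosine(d_true, d_pred), kl_divergence(d_true, d_pred))
```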
nips_2022_tbdk6XLYmZj
Learning Best Combination for Efficient N:M Sparsity
By forcing N out of M consecutive weights to be non-zero, the recent N:M fine-grained network sparsity has received increasing attention for its two attractive advantages over traditional irregular network sparsity methods: 1) promising performance at high sparsity; 2) significant speedups on NVIDIA A100 GPUs. Current implementations of N:M sparsity require either a tedious pre-training phase or computationally heavy from-scratch training. To circumvent these problems, this paper presents an efficient solution for achieving N:M fine-grained sparsity from scratch. Specifically, we first re-formulate N:M fine-grained sparsity as a combinatorial problem, in which the objective is to choose the best weight combination among $C_M^N$ candidates. Then, we equip each combination with a learnable importance score, which can be jointly optimized along with its associated weights. Through rigorous proof, we demonstrate that the magnitude of the optimized score well reflects the importance of its corresponding weight combination to the training loss. Therefore, by gradually removing combinations with smaller scores until only the best one is left, N:M fine-grained sparsity can be efficiently obtained during the normal training phase without any extra expenditure. Comprehensive experimental results demonstrate that our proposed method for learning the best combination, dubbed LBC, consistently increases the efficacy of off-the-shelf N:M methods across varying networks and datasets. Our project is released at https://github.com/zyxxmu/LBC.
Accept
The paper presents a novel method for training N:M sparse-weight neural networks, which can be significantly accelerated by NVIDIA A100 GPUs. The optimal N:M pattern can be found by jointly solving a series of combinatorial problems with finite collections of candidates. The majority of the reviewers found the paper convincing. The AC believes that the concern raised by 11EU can be addressed by re-phrasing the descriptions and does not affect the effectiveness or novelty of this paper.
train
[ "uvMZVQK27k2", "iAXdxkkEWwl", "qt2LrB-IWPt", "6g1wcdDVEC", "ZKKWwTjWGyu", "4U_ghpcF65a", "aUkaOGnFAOy", "1BHL8Ggqb5K", "1w09gsKz66v", "9oHhSisvYqAZ", "-Qk6bouBqY5", "GBInIo0p3p9", "l5U-UHBcYey", "aXWq5W3qXP_", "w4wOswr4Bxx" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your timely and constructive feedback. Also, we appreciate your effort in reviewing this paper. We believe we introduce a sound approach and have made a strong contribution to N:M sparsity, which are already appraised by the other three reviewers. Our further responses are provided below and we wish this time we can satisfy all your concerns.\n\nResponse to Q1: \nWe understand your concern. In our final version, we will rewrite contents w.r.t. “divide-and-conquer” to eliminate any possible confusion. We wish that more focuses can be put on what we have done in this paper when making your final decision. Thanks.\n\nResponse to Q2:\nFull of respect, we first highly suggest going through other reviewers’ comments. This paper is well favored by all other reviewers. On one hand, our key contribution falls into the reformulation of N:M sparsity and an efficient solution that drastically reduces the training cost of existing approaches with even higher performance, which is appreciated by all reviewers. On the other hand, our learnable scores do quite differ from existing studies such as the mentioned network slimming. See our response to Q3, which may provide a clearer example to show how our score evolves and help achieve N:M sparsity.\n\nResponse to Q3:\nLet us make a toy example. \nAn importance score vector is initialized as S = [1, 1, 1, 1, 1, 1].\nIf removing S from Eq.(8), S will be fixed to its initial statement since it does not involve in the computing graph as analyzed in our last response. Consequently, we have no idea which combinations should be removed.\nOn the contrary, keeping S in Eq. (8) enables the updating of S. Therefore, at the next iteration, S can be changed, for example, to state [0.1, 0.8, 1.5, 0.9, 1.2, 1.3]. And then, the combination candidate with the lowest importance score 0.1 is removed. In this manner, N:M sparsity can be efficiently achieved by gradually removing low-scored candidate subsets along the network training.", " I would like to thank the authors for the response. The additional results and explanation cleared my questions Q4-Q6. But I think my questions 1-3 were not quite addressed.\n\nQ1: According to the definition on Wikipedia at https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm, \"divide-and-conquer\" is a well established and widely accepted term in computer science. And the proposed method is definitely not a divide-and-conquer method. Misusing such a term is not a practice and it is the authors' responsibility to eliminate such confusion by using correct terms. It is not the audience's fault that they \"misunderstood\" what the authors tried to \"deliver\".\n\nQ2: I still don't buy the argument. The authors' just repeated my argument and I still found the *technique* used is not novel.\n\nQ3: I am not sure what will be learned for something that can be safely removed from the formulation.", " Thanks for your response, I have updated the official comments.", " Dear Reviewer biEJ,\n\nThanks again for your time and efforts. As the deadline for discussion is approaching, it would be nice of you to let us know whether our answers have solved your concerns so that we can better improve our work. We are happy to provide any additional clarifications that you may need.\n\nBest wishes,\n\nPaper1092 Authors", " Dear Reviewer 378D,\n\nThanks again for your time and efforts. 
As the deadline for discussion is approaching, it would be nice of you to let us know whether our answers have solved your concerns so that we can better improve our work. We are happy to provide any additional clarifications that you may need.\n\nBest wishes,\n\nPaper1092 Authors", " Dear Reviewer VQRc,\n\nThanks again for your time and efforts. As the deadline for discussion is approaching, it would be nice of you to let us know whether our answers have solved your concerns so that we can better improve our work. We are happy to provide any additional clarifications that you may need.\n\nBest wishes,\n\nPaper1092 Authors", " Dear Reviewer 11EU,\n\nThanks again for your time and efforts. As the deadline for discussion is approaching, it would be nice of you to let us know whether our answers have solved your concerns so that we can better improve our work. We are happy to provide any additional clarifications that you may need.\n\nBest wishes,\n\nPaper1092 Authors", " Thanks for your kind comments. We sincerely wish our response can well address your concerns so as to raise the rating score. We believe we introduce a sound approach and have made a strong contribution to N:M sparsity, which are also highly-appraised by Reviewer VQRc and Reviewer 378D.\n\n**Q1**: *From the analysis of [1] Efficient Neural Network Training via Forward and Backward Propagation Sparsification, the gradient-based sparse training method cannot exceed the acceleration above an upper bound of around 1.5x since the gradient to structure parameters require dense gradient calculation, even though the gradient to weights is sparsified and the forward propagation is sparsified. I think this is a fundamental problem and unless the discussions around training costs are corrected, it should not be published. Otherwise it will again mislead the community. If this issue can be corrected, I am happy to raise the rating score to be positive.*\n\n**A1**: We deeply understand your concern about the sparsity acceleration, which indeed motivates us to focus on N:M sparsity. Specifically, as illustrated in [1], the gradients of the pruned weights are non-zero as the pruned weights still participate in the forward propagation albeit their zero values. Therefore, the dense backpropagation is unavoidable in existing sparse training methods. \n\nLuckily, this obstacle in traditional irregular network sparsity has been well addressed under the setting of N:M regular sparsity. The unique characteristics in N:M has been well supported by NVIDIA Ampere Core to reach sparse training.\nPlease kindly refer to figure 12, page 32 in [2] (link: https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-ampere-architecture-whitepaper.pdf). As can be seen, the A100 Sparse Tensor Core **skip** the compute of pruned weights instead of treating them as 0s, leading to a smaller matrix multiply and achieving a 2x speedup in 2:4 sparsity. Therefore, **the pruned weights no longer participate into both forward and backward propagation.** Indeed, the upper bound for reducing the training cost of N:M sparse training is (M/N)x. \n\nThanks again for your constructive comments on our work. We hope this rebuttal does well address your concerns. Besides, we will add a discussion between N:M sparsity and the claim in [1] in our final version to avoid any possible confusion for the community. If you have any other questions, please let us know and we are more than glad to discuss with you. 
\n\n[1] Efficient Neural Network Training via Forward and Backward Propagation Sparsification. In NeurIPS, 2021.\n\n[2] Nvidia a100 tensor core gpu architecture. https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-ampere-architecture-whitepaper.pdf, 2020.\n\n", " Many thanks for your responsible reviews that will help us improve the manuscript. Please see the following answers to your questions.\n\n**Q1**: *Does it mean that during training all combinations contribute to the output of the layer? If true it will mean higher memory cost during training and higher computational complexity. From another perspective, if S is implemented via straight through estimator than it is sampled during the training and the gradient propagates only to the elements with S = 1.* \n\n**A1**: First, it is true that all combinations contribute to the output since the combination scores participate in the forward and backward propagation. However, please note that it leads to negligible costs on computation and memory since only one binary mask is finally introduced to indicate the removal/preservation of weights without any complex matrix multiplication (see Eq.(8)). In the backpropagation, the gradient of combination scores is obtained by a simple dot multiplication as W*∂L/∂W. Therefore, the overall memory cost as well as computational complexity are still drastically reduced comparing to other methods with dense gradient calculation [1] and pre-training burden [2], which are avoidable in our method (see **Q2** for an experimental validation).\n\n[1] Learning N: M fine-grained structured sparse neural networks from scratch. In ICLR, 2020\n\n[2] Nvidia a100 tensor core gpu architecture. https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-ampere-architecture-whitepaper.pdf, 2020.\n\n**Q2**: *At my current understanding, the cost of initial epochs is going to be higher due the requirement of forward \\& backward to be performed over all candidates. It will be nice to have some latency study as authors primary focused on efficiency of N:M sparsity.*\n\n**A2**: Please kindly refer to our answer to **Q1** first. Following your valuable suggestion, we provide the training time comparison for sparsifying ResNet-50 on ImageNet below.\n\n| N:M Pattern | Method | Training Time (NVIDIA A100 GPU days) |\n| ------------- | ------ | -------------------------------------- |\n| 2:4 | Baseline | 3.32 | \n| 2:4 | ASP | 4.12 |\n| 2:4 | SR-STE | 2.76 |\n| 2:4 | LBC | 2.39 |\n\nAs can be can, our LBC can effectively reduce the training latency for N:M sparse training. These results will be included in our final version.\n\n**Q3**: *Score matrix. Does it mean that the score is a real value learned by the model? Is there any evidence that it does reflect importance? Authors provide derivation to end up with magnitude times gradient equation. However, providing some empirical evidence will be beneficial.* \n\n**A3**: Exactly, the score is indeed a real and learnable value. In Sec. 3.4, we have formally proved that our introduced score matrix can be used to measure the relative importance between candidate subsets. \n\nFollowing your constructive advice, we further introduce three additional criteria for comparison with our combination score. These three additional criteria include 1) inversely sorting our learned scores; 2) pruning using the magnitude-based criterion, 3) the gradient-based criterion. 
The experiments are performed using ResNet-50 on ImageNet and the results below well demonstrate the efficacy of the proposed learnable score for reflecting the importance of corresponding combinations. \n\n| Method | N:M Pattern | Top-1 Accuracy |\n| --------------------------- | ----------- | -------------- |\n| Combination score | 2:4 | 77.2 |\n| Combination score (Inverse) | 2:4 | 72.9 |\n| Magnitude-based | 2:4 | 75.8 |\n| Gradient-based | 2:4 | 76.1 |\n\n**Q4**: *Can authors explain how to set newly introduced hyper-parameters and what is the intuition behind the choice?*\n\n**A4**: The introduced hyper-parameters in this paper include $t_i$ and $t_f$, respectively denoting the initial and final epochs to remove combinations. As analyzed in Sec 4.4, larger values of $t_i$ and $t_f$ enable more robust performance but also more training FLOPs. Therefore, the users can flexibly switch the values of these two hyper-parameters to achieve a trade-off between training costs and model performance according to the availability of hardware resources.\n\n**Q5**: *Was the training recipe exactly the same when compared to SR-STE?*\n\n**A5**: Definitely. For fair comparison, we keep the same training recipe with SR-STE including learning rate, training epoch, *etc*.", " Thanks for your in-depth review that will help us strengthen the manuscript. We hope our response can address your concern here.\n\n**Q1**: *Characterizing the problem as jointly solving a series of combinatorial problems is trivial, and more importantly, does not simply the original problem in any sense. The authors claimed that they adopted a \"divide-and-conquer\" method. But according to my understanding of \"divide-and-conquer\", this claim is not true. The problem is not simplified into simpler forms with lower complexity. It is still a complex joint optimization problem*. \n\n**A1**: With full respect, what we aim to deliver may be misunderstood. We do not intend to introduce traditional \"divide-and-conquer\" methods to solve our optimization problem. Here, our divide-and-conquer is much like algorithm where from a large set of proposals only one is selected in the end (**see summary of Reviewer 378D**). What we aim to deliver seems to be well understood by other reviewers. Besides, characterizing the problem as jointly solving a series of combinatorial problems is also supported or even highly appraised by other reviewers. We apologize if our word usage causes any confusion and wish that our explanation can well address your concern.\n\n**Q2**: *The authors uses a simple method that learns a score for each combination. This type of methods is not new and can date back to network slimming (Liu et al., 2017).*\n\n**A2**: With all due respect, though the concept of learnable scores exists in other methods, we would like to highlight that our LBC has its unique designs to solve N:M sparsity. We do not learn the importance of each weight as other methods. Instead, our learnable score is particularly designed to reflect the importance of each combination based on our reformulation of N:M sparsity, which has also been well proofed in Sec.3.4 of our paper. As a result, we achieve state-of-the-art performance compared with other N:M methods even with far fewer training costs. \n\n**Q3**: *The only place that the learnable scores are used is eq. (8). But if I understand correctly, removing the S score in eq. (8) does not influence the whole methods at all. 
If that is the case, I am quite doubtful of what scores are learnt finally.*\n\n**A3**: Nice question. Intuitively, removing S does not influence the result of Eq. (8) at least in the forward propagation. However, the motive of retaining S in Eq. (8) is actually to derive the gradient of S in the backward propagation (see Eq.(9)). Recall that in each training epoch, we remove a part of low-scored candidates (Lines 7-8, Alg.1) and then the binary mask is generated (Eq. (8)). If removed from Eq. (8), S will be frozen and unlearnable since it does not involve in the computing graph. Consequently, the generated binary mask is only related to the initialization of S. Therefore, S needs to be kept in Eq.(8). In Sec.3.4 of our paper, we have formally proved that the learned score S in this manner can well reflect the relative importance between different candidate subsets. Also, we further conduct experiments to show that our learnable score performs much better than existing criteria (**kindly refer to our answer to Q3 of Reviewer 378D**). To avoid confusion, the above discussion will be added in our paper (right after Eq.(8)).\n\n**Q4**: *For a paper that emphasizes its training efficiency, there is no energy or wall-time comparison of the proposed method and other baselines.* \n\n**A4**: We appreciate this valuable comment, which helps us to further improve the quality of our paper. The wall-time comparison for ResNet-50 at 2:4 pattern is provided below, where the proposed LBC shows a superior training efficiency compared with other baselines. This experiment will be included in our paper.\n\n| N:M Pattern | Method | Training Time (NVIDIA A100 GPU days) |\n| ------------- | ------ | -------------------------------------- |\n| 2:4 | Baseline | 3.32 | \n| 2:4 | ASP | 4.12 |\n| 2:4 | SR-STE | 2.76 |\n| 2:4 | LBC | 2.39 |\n\n**Q5**: *Whether Nvidia A series GPU can exploit the gradually increasing N:M sparsity to improve training efficiency?*\n\n **A5**: Definitely. During the gradual pruning procedure, increasing number of blocks will satisfy the n:m pattern due to the removal of redundant combinations, which therefore can be supported by NVIDIA A100 GPU to improve training efficiency. \n\n**Q6:** *The superscript of $L^l$ in the last term of eq. (9) seems to be in the wrong place.*\n\n**A6**: We highly appreciate your careful review. It is a typo and will be corrected. Besides, a more comprehensive checking will be made in our final version.", " Thanks for your constructive and supportive comments.\n\n**Q1**: *The authors are encouraged to conduct experiments on other tasks.*\n\n**A1**: Following your advice, we further conduct experiments on two other tasks including object detection and instance segmentation. 
Below displays the experimental results in which the proposed LBC performs best on both tasks.\n\nObject detection results on COCO:\n\n| Model | Method | Sparse Pattern | mAP |\n| ---------- | ------ | -------------- | ---- |\n| F-RCNN-R50 | - | Dense | 37.4 |\n| F-RCNN-R50 | SR-STE | 2:4 | 38.2 |\n| F-RCNN-R50 | LBC | 2:4 | 38.5 |\n| F-RCNN-R50 | SR-STE | 2:8 | 37.2 |\n| F-RCNN-R50 | LBC | 2:8 | 37.3 |\n\n Instance segmentation results on COCO:\n\n| Model | Method | Sparse Pattern | Box mAP | Mask mAP |\n| ---------- | ------ | -------------- | ------- | -------- |\n| M-RCNN-R50 | - | Dense | 38.2 | 34.7 |\n| M-RCNN-R50 | SR-STE | 2:4 | 39.0 | 35.3 |\n| M-RCNN-R50 | LBC | 2:4 | 39.3 | 35.4 |\n| M-RCNN-R50 | SR-STE | 2:8 | 37.6 | 33.9 |\n| M-RCNN-R50 | LBC | 2:8 | 37.8 | 34.0 |\n\n**Q2**: *The authors only report the training FLOPs, the authors are encouraged to report the training time.*\n\nThe training time comparison for sparsifying ResNet-50 on ImageNet is provided in the following, which will also be included in our final version.\n\n| N:M Pattern | Method | Training Time (NVIDIA A100 GPU days) |\n| ------------- | ------ | -------------------------------------- |\n| 2:4 | Baseline | 3.32 | \n| 2:4 | ASP | 4.12 |\n| 2:4 | SR-STE | 2.76 |\n| 2:4 | LBC | 2.39 |\n", " This work is focused on efficient learning of N:M sparsity from scratch. Finding the optimal N:M sparsity pattern is characterized as jointly solving a series of combinatorial problems with finite collections of candidates. The authors proposed to associate a learnable score parameter with each possible combination and learn it from data with the help of straight-through-estimator (STE). The experiments showed the proposed training method yields slightly better results with much fewer FLOPs compared to stronger baselines and much better performance than others. Strengths:\n\n+ This work successfully showed that N:M sparsity under a kind of gradual pruning manner can work well on large modern neural networks. The gradual pruning enables the possibility of a efficient method that trains N:M sparse models from scratch so that the expensive dense training process can be avoided.\n\n+ Experiments are promising in terms of performance and FLOPs.\n\n--------------------------------\n\nWeaknesses:\n\n- The biggest issue of this work if over-claiming. Characterizing the problem as jointly solving a series of combinatorial problems is trivial, and more importantly, does not simply the original problem in any sense. The authors claimed that they adopted a \"divide-and-conquer\" method. But according to my understanding of \"divide-and-conquer\", this claim is not true. The problem is not simplified into simpler forms with lower complexity. It is still a complex joint optimization problem. And the authors uses a simple method that learns a score for each combination. This type of methods is not new and can date back to network slimming (Liu et al., 2017).\n- The only place that the learnable scores are used is eq. (8). But if I understand correctly, removing the S score in eq. (8) does not influence the whole methods at all. If that is the case, I am quite doubtful of what scores are learnt finally.\n- For a paper that emphasizes its training efficiency, there is no energy or wall-time comparison of the proposed method and other baselines. This is related to another of my question that --- whether Nvidia A series GPU can exploit the gradually increasing N:M sparsity to improve training efficiency? 
Please see the weaknesses part and address my questions there.\n\nThe superscript of $\\mathcal{L}^l$ in the last term of eq. (9) seems to be in the wrong place. Please see the weaknesses part and address my questions there.", " This paper formulates the N:M sparsity as the combinatorial optimization problem. \nThis formulation is a well-motivated and promising direction. Then the authors use a learnable score matrix to measure \nthe candidate's importance. The authors use the gradients of group elements to update the score matrix. Strengths:\n\n\nThis paper is well-written and easy to follow.\n\nThe performance gain is consistent on various sparse ratios and models.\n\nAt present, reducing the sparse model training cost can motivate the research community to study sparse training acceleration.\n\nThe source code is available.\n\nWeaknesses:\nThe authors are encouraged to conduct experiments on other tasks.\n\nThe authors only report the training FLOPs, the authors are encouraged to report the training time. See Strengths And Weaknesses\n\n\n###############################################\n\nPost-rebuttal: I have read the rebuttal carefully, and all of my comments/questions were addressed. Thanks. Na", " Authors tackle the problem of a learnable structural sparsity parametrized by N:M pattern. The best sparsity patterns per layer are found via divide-and conquer like algorithm where from a large set of proposals only one is selected in the end. At first, all possible candidate combinations are proposed, then, during training they are ranked according to the importance variable. At every algorithm step, the combination with the lowest score is removed from the pool. Finally, the last standing combination is selected. Experiments are performed on Resnet50 and DIET model on Imagenet. \n Paper provide a comprehensive overview of existing pruning paradigms (structured and unstructured). Ampere sparsity N:M is also clearly introduced. \nThe problem of structural pruning is usually not formulated a combinatorial problem. The fact that authors do makes the paper stronger and the approach sound. Particularly, formulation in section 3.2 is sufficient. The implementation of learning score matrix S is a bit unclear and answering questions from the section below will help to understand the paper better. \nThe code is available to reviewers, I went over it but didn't run. Formulation of the implementation is a bit confusing. Does it mean that during training all combinations contribute to the output of the layer? If true it will mean higher memory cost during training and higher computational complexity. \nFrom another perspective, if S is implemented via straight through estimator than it is sampled during the training and the gradient propagates only to the elements with S = 1. Please clarify this point. \nScore matrix. Does it mean that the score is a real value learned by the model? Is there any evidence that it does reflect importance? Authors provide derivation to end up with magnitude times gradient equation. However, providing some empirical evidence will be beneficial. \nCan authors explain how to set newly introduced hyper-parameters and what is the intuition behind the choice? \nWas the training recipe exactly the same when compared to SR-STE? At my current understanding, the cost of initial epochs is going to be higher due the requirement of forward0backward to be performed over all candidates. 
It will be nice to have some latency study as authors primary focused on efficiency of N:M sparsity. ", " This paper deals with the problem of N:M sparsity and proposes an effective method with structure parameter to weigh the importance of different combination. The final result is appealing. Strengths:\n1. Good final pruning results\n\n\nWeakness:\n1. From the analysis of [1] Efficient Neural Network Training via Forward and Backward Propagation Sparsification, the gradient-based sparse training method cannot exceed the accelearation above a upper bound of around 1.5x since the gradient to structure parameters require dense gradient calculation, even though the gradient to weights is sparsified and the forward propagation is sparsified. I think this is a fundamental problem and unless the discussions around training costs are corrected, it should not be published. Otherwise it will again mislead the community. If this issue can be corrected, I am happy to raise the rating score to be positive. \n\nAfter rebuttal: I got the difference. The gradient calculation to auxiliary parameters only finally cares about the gradient to weights. I raise the score.\n The problem of training cost saving should be corrected. The problem of training cost saving should be corrected." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 3 ]
[ "iAXdxkkEWwl", "9oHhSisvYqAZ", "4U_ghpcF65a", "w4wOswr4Bxx", "aXWq5W3qXP_", "l5U-UHBcYey", "GBInIo0p3p9", "w4wOswr4Bxx", "aXWq5W3qXP_", "GBInIo0p3p9", "l5U-UHBcYey", "nips_2022_tbdk6XLYmZj", "nips_2022_tbdk6XLYmZj", "nips_2022_tbdk6XLYmZj", "nips_2022_tbdk6XLYmZj" ]
nips_2022_6H2pBoPtm0s
ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation
Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision transformers as backbones to extract features for a given person instance and a lightweight decoder for pose estimation. It can be scaled up from 100M to 1B parameters by taking the advantages of the scalable model capacity and high parallelism of transformers, setting a new Pareto front between throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, pre-training and finetuning strategy, as well as dealing with multiple pose tasks. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our basic ViTPose model outperforms representative methods on the challenging MS COCO Keypoint Detection benchmark, while the largest model sets a new state-of-the-art. The code and models are available at https://github.com/ViTAE-Transformer/ViTPose.
Accept
This submission received positive reviews. After the rebuttal and discussions, all the reviewers feel positive about this submission, with the raised concerns addressed. After checking all the reviews and the rebuttal, the AC stands on the reviewers' side and believes the current work is suitable for publication in this venue. The authors shall revise the manuscript according to the reviewers' suggestions in the camera-ready submission.
train
[ "g1zIftl3BW", "fUFmPud53T_", "oQGVgj1TsZU", "qpWxKv4yI-g", "wArqLFBkdnI", "bj759I0exbM", "0pmrjJgt2t", "ZMMvMh8ctVy", "46AzOFeL0nd", "F1K_2ig6GI", "E9jPdR4MC0I", "I4F4wdyIyWm", "DPqq0H43in_", "Rzcqq8Teoc0", "7dG5O3_FS2", "O1hmiFQgI4I", "G1nIEvXS55U", "V3QsQXQklIA", "1-KjMQxveOd", "oNWIMSRZiPu", "qeph9O4Yy3z", "TO2Drlc6R98", "5Edx4cWukJV" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your valuable comments and suggestions! We are encouraged by the resolution of your major concerns and appreciate your constructive comments to improve our work. We promise that we will incorporate all feedback in the revised version and carefully amend the paper.", " The authors have well addressed my concerns, especially on motivation and novelty. \nThe concern about training paradigm flexibility was not addressed. I still believe CNN has similar abilities as ViTPose. However, this is a minor concern. I hope the authors can carefully amend the paper as a few severe confusion is discussed above.\nI will update my rating accordingly.", " Thanks for your valuable comments and suggestions! We will conduct the experiments with variant seeds and report more detailed results in the revised version. Besides, following your insightful suggestion, we will highlight that this is only a starting exploration in the revised version. We sincerely appreciate your constructive comments to improve our work.", " Thank you for your answers and sorry for my late reply.\n\nI still think that the \"distillation via the input\" is not very well justified (it kind of make sense to try to distill a big model by forcing a small model to emulate its outputs), however even after reading the author's response it is still not clear why having those extra tokens coming from the big model would help to transfer knowledge (since those vectors are very close to the inputs -- they perform attention over input vectors, then its not clear how much more knowledge is store in those weights for a big model versus a small model) . Nonetheless if the results look to be significant (please confirm the AP improvements are not within noise), then maybe something is indeed happening that could deserve further investigation. I would advise the authors to highlight that this is only a starting exploration as I believe really making sure that this does distillation would deserve a more specific and detailed study.\n\nThanks.", " We sincerely thank the ACs for your kind reminder! We look forward to the reviewers' response and are willing to discuss these concerns with the reviewers further. Your thoughtful reviews help us a lot in improving the work.", " Thanks for your valuable comments and suggestions! We are encouraged by the resolution of your issue and appreciate your constructive comments to improve our work.", " Thanks for the authors’ response. \n\nThe response fully solves my concerns. I will update my rating from 5 to 6. \n", " Dear reviewers,\n\nThe deadline is approaching, please respond to the authors and see whether they have addressed your raised issues.\n\nBest,\nYour AC", " We sincerely thank all the reviewers again for your thoughtful feedback and appreciate your efforts in reviewing the paper. We have endeavored to address the raised concerns regarding the difference between the knowledge and visual tokens and the computational complexity analysis from Reviewer 6Ljs, the updated results for training on multiple pose datasets with animal pose estimation, wholebody keypoint estimation, and hand keypoint estimation tasks in a joint learning pipeline from Reviewer 6Ljs and ERVW, the discussion of novelty and contribution of ViTPose from Reviewer 6Ljs and ERVW, the motivation of knowledge token design and the definition of layer-wise decay from Reviewer nUSy, and provide more analysis about the speed and performance of ViTPose to address the concerns raised by Reviewer vNx3. 
We are happy to discuss them with you in the openreview system if you feel that there still are some concerns/questions. We also welcome new suggestions/comments from you!", " We sincerely thank you for the careful and thoughtful comments. Below we address the key concerns and promise will incorporate all feedback in the revised version.\n\n*1. In Table8, compared to HRFormer, ViTPose has more parameters but with a faster speed (e.g., ViTPose-B vs. HRFormer-B), which should be explained in detail.*\n\n**A1**: Thanks for your pointing out. We will add the discussion in the revised version. ViTPose-B employs plain vision transformers as encoders for feature extraction. The plain vision transformers consist of one single branch that operates on a relatively small feature resolution, i.e., 1/16 feature resolution. Thanks to that, ViTPose can benefit from the hardware-friendly operations in plain vision transformers, e.g., the attention and FFN modules of vision transformers are composed of parallel dense matrix multiplication operations, without the need to manipulate the memory with reshaping or resize operations frequently. HRFormer-B is also a fantastic work that adapts vision transformers for pose estimation. However, HRFormer employs a multi-branch structure with high-resolution branches, low-resolution branches, and interaction modules. The operation on high-resolution branches (1/4) is much more time-consuming for vision transformers, and thus other branches need to wait for the sync among the multiple branches. Besides, due to the interaction between multi-resolution features, HRFormer needs to frequently conduct reshaping and resize operations, which involves several memory manipulations. These memory operations break the pipeline of GPU execution and thus slow down the speed.\n\n*2. The authors are also encouraged to employ a smaller model (e.g., ViTPose-S or stack smaller layers) with comparable parameters to previous methods to show the superiority of this baseline.*\n\n**A2**: Thanks for your suggestion. We include a small model ViTPose-S in Table 8 for comparison. Generally, ViTPose-S with 24M parameters obtains **73.8 AP** with **1432 fps** on MS COCO. It is better than ResNet-50 (71.8 AP, 1351 fps) and ResNet-152 (73.5 AP, 829 fps) and comparable with the frontier transformer-based model HRFormer-S (73.8 AP, 269 fps). Compared with HRNet-w32 (74.6 AP, 916 fps), the performance of ViTPose-S is marginally weak, but it shows a much faster inference speed. The overview of performance and inference speed comparison is available in [link](https://ibb.co/n1RvgtZ). As demonstrated in the figure, ViTPose also sets the Parto Frontier in the small size models.\n\n*3. Besides, such a method in this manuscript should be a very simple baseline, and it is easy to think and implement. What is the reason for it surpassing the previous transformer-based method? It should be discussed.*\n\n**A3**: Thanks for your comments. It is a good question. Previous works like TokenPose, TransPose, and HRFormer obtain superior performance on pose estimation with vision transformers. However, most of them treat transformers as post-processors for feature enhancement, which do not fully explore the representation ability of transformers. HRFormer makes use of the strong representation ability of vision transformers but with task-favored multi-resolution designs. 
Recently, the community has demonstrated that plain vision transformers themselves could have strong representation abilities and perform well in image classification, object detection, segmentation, VQA, etc [1,2,3,4]. Such observation inspires us to rethink and explore **whether plain and simple vision transformers have strong representation ability in pose estimation tasks**. This direction has seldom been explored in previous studies. ViTPose makes a step to find the answer and surprisingly finds that plain vision transformers with good representation ability and proper training can solve pose estimation tasks well and demonstrates several good properties, including simplicity, scalability, flexibility, and transferability. **Owing to the strong representation ability of transformer blocks, the vision transformer itself can already encode features of good linear separability for pose estimation.** As a result, the decoder can be extremely simple (e.g., some bilinear upsample layers) and obtains a competitive performance. These findings can help us to further understand how to better adapt and design vision transformers for pose estimation tasks. We hope this study could serve as a starting point in adapting plain vision transformers for different vision tasks and exploring the interesting and beneficial properties of vision transformers.\n\n> [1] Bao, Hangbo, et al. \"BEiT: BERT Pre-Training of Image Transformers.\" ICLR 2021.\n\n> [2] He, Kaiming, et al. \"Masked autoencoders are scalable vision learners.\" CVPR 2022.\n\n> [3] Li, Yanghao, et al. \"Exploring plain vision transformer backbones for object detection.\" ECCV 2022.\n\n> [4] Yu, Jiahui, et al. \"Coca: Contrastive captioners are image-text foundation models.\" arXiv 2022.", " We sincerely thank you for the careful and thoughtful comments. Below we address the key concerns and promise will incorporate all feedback in the revised version.\n\n*1. What is the intuition behind the \"token-based distillation\"? Since it is trained to only work for a specific set of parameters, I am not sure to understand why adding it to a smaller model would suddenly distill knowledge from the big model to the small. In Table 7, the gains are not very clear (I am not super familiar with the type of variance one may encounter on the COCO keypoint detection benchmark, can the authors confirm whether or not those results are significant?). It would be great if the authors could comment on that point. If this is inspired by past work, it should be cited.*\n\n**A1**: Thanks for your pointing out. Distillation of knowledge from the output is a common practice for both CNN and transformers. However, one major difference between CNN and transformer structure is the flexibility of inputs. For example, one can simply append extra tokens to the transformer’s inputs, as the attention and FFN modules of the transformer treat the inputs as 1D sequences. Previous works in classification [1] and NLP pretraining [2] tend to append an extra learnable token (cls toke or task token) to gather classification-sensitive information from the visual tokens and encode the task information to guide the network’s output. Inspired by these works, we wonder whether an extra token can serve for the knowledge distillation tasks by memorizing the knowledge from the larger models and guiding the smaller model via attention. To this end, we design a simple knowledge token-based distillation method and utilize the properties of vision transformers. 
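The mechanism can be roughly sketched as follows (a minimal illustrative sketch; the class and variable names are hypothetical and not taken from our implementation):

```python
import torch
import torch.nn as nn

class PoseEncoderWithKnowledgeToken(nn.Module):
    # Hypothetical wrapper that prepends one extra token to the visual (patch) tokens.
    def __init__(self, blocks, embed_dim, frozen_token=None):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)              # plain ViT blocks
        if frozen_token is None:
            # larger (teacher) model: the knowledge token is learnable
            self.knowledge_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        else:
            # smaller (student) model: reuse the teacher's token and keep it frozen
            self.register_buffer("knowledge_token", frozen_token.detach().clone())

    def forward(self, patch_tokens):                     # patch_tokens: (B, N, C)
        token = self.knowledge_token.expand(patch_tokens.size(0), -1, -1)
        x = torch.cat([token, patch_tokens], dim=1)      # visual tokens can attend to it
        for blk in self.blocks:
            x = blk(x)
        return x[:, 1:]                                  # drop the extra token before decoding
```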
Specifically, the knowledge token is trained to gather the pose knowledge from the larger models. After that, we freeze the knowledge toke and append it to the inputs of the smaller models to aid the smaller models’ training. Thanks to the flexibility of attention operations, the visual tokens can retrieve knowledge from the knowledge token in each transformer block and adapt its feature according to the knowledge, thus enabling small models to better focus on modeling the pose-related information. Different from the cls token and task token that is updated along with the models’ training and used for feature gathering, the knowledge token explored in the paper is frozen and expected to help the small models learn better feature representation for pose estimation tasks. Compared to the classic output-based distillation (+0.5 AP, +10\\% training memory), this simple method brings about 40\\% performance gain (+0.2 AP) with almost no extra training memory cost. Besides, it can also work with the classic output-based distillation to further improve the small models’ performance. It also should be noted that the exploration is just a starting point and baseline for exploring the flexibility of vision transformers for knowledge distillation. We hope it can provide some insights to the community and inspire new studies in exploring the transformers’ properties.\n\n> [1] Dosovitskiy, Alexey, et al. \"An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.\" ICLR 2020.\n\n> [2] Kenton, et al. \"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.\" NAACL 2019.\n\n*2. What is layer-wise decay? How useful is that? Can you cite the paper for this?*\n\n**A2**: Layer-wise decay is a useful training technique for training both vision and NLP transformers [3,4,5,6]. It aims at setting different learning rates for different layers, i.e., $lr_i = lr * lrd^{(d-i)}$, where $i$ is the depth of the transformer layer, $d$ is the number of layers in the transformer model, and $lrd$ is the layer-wise learning rate decay rate. We disable the layer-wise learning rate decay for ViTPose-B to investigate the impact of this technique by setting each layer with the same learning rate. The performance of the model **drops from 75.8 AP to 74.4 AP**. It is because different layers of the vision transformer are responsible for encoding different information, i.e., the later layers are more task-specific than the former layers [7]. Thus, it is beneficial to tune the later layers faster than the former layers to adapt the vision transformer better and obtain better performance. We will provide the reference in the revised paper.\n\n> [3] Bao, Hangbo, et al. \"BEiT: BERT Pre-Training of Image Transformers.\" ICLR 2021.\n\n> [4] He, Kaiming, et al. \"Masked autoencoders are scalable vision learners.\"CVPR 2022.\n\n> [5] Xie, Zhenda, et al. \"Simmim: A simple framework for masked image modeling.\" CVPR 2022.\n\n> [6] Yang, Zhilin, et al. \"Xlnet: Generalized autoregressive pretraining for language understanding.\" Neurips 2019.\n\n> [7] Howard, Jeremy, and Sebastian Ruder. \"Universal Language Model Fine-tuning for Text Classification.\" ACL 2018.", " *3. In lines 76-77, the authors claimed this model can be extended from a single pose dataset to multiple pose datasets. This can be confusing as in human pose estimation, multi-pose datasets normally refer to images with more than one pose to predict. 
It looks like the authors are describing extending the method to multi-dataset training.*\n\n**A3**: Sorry for the misunderstanding. We will make it clearer in the revised version. The multiple pose datasets mentioned in the paper represent joint training with different pose estimation datasets. During training, we randomly sample images from each dataset at each iteration and feed them into the network for feature extraction. Then, the extracted features are fed to dataset-specific heads for pose prediction and loss computation. Regarding whether there are multiple or single persons in the input image, we follow the common top-down pose estimation pipeline [10], using a person detector to detect each instance of a person in the input image, followed by the pose estimation algorithm to estimate the pose of each person.\n\n> [10] Xiao, Bin, Haiping Wu, and Yichen Wei. \"Simple baselines for human pose estimation and tracking.\" ECCV. 2018.\n\n*4. The paper particularly discussed \"multi-task training\". As far as I can tell, it's multi-dataset instead of multi-task training. The task is human pose estimation with different numbers of joints.*\n\n**A4**: Sorry for the misunderstanding. We will make it clearer in the revised version. The multi-task training in the paper represents joint training with different pose estimation datasets. We denote each different pose estimation dataset as an individual pose estimation task since they have different posture distribution, keypoint annotations, and background scenes. We will change the term in the revised version. It should be noted that except for the different human pose estimation tasks discussed in the paper, we further extend the training paradigms to animal pose estimation (ap10k) [11], whole-body keypoint extraction (including face and foot) [12], and hand pose estimation tasks [13]. The results are demonstrated as follows. The results of ResNet and HRNet variants are taken from the MMPose website. Thanks to the strong representation ability of the vision transformer, ViTPose not only works well on human pose estimation across several tasks, but also performs well on the animal, whole body, and hand keypoint estimation.\n\n| | COCO (AP) | AIC (AP) | MPII (PCKh) | CrowdPose (AP) | OChuman (AP) | AP10K (AP) | WholeBody (AP) | InterHand (AUC) |\n| :----------: | :---------: | :--------: | :-----------: | :--------------: | :------------: | :----------: | :--------------: | :---------------: |\n| ResNet-50 | 71.8 | \\ | 88.2 | 63.7 | 54.6 | 68.1 | 52 | 85.1 |\n| ResNet-101 | 72.6 | 29.4 | 88.8 | 64.7 | 55.9 | 68.1 | 53.3 | \\ |\n| ResNet-152 | 73.5 | \\ | 88.9 | 65.6 | 57 | \\ | 54.8 | \\ |\n| | | | | | | | | |\n| HRNet-w32 | 74.6 | 32.3 | 90 | 67.5 | 59.1 | 72.2 | 55.3 | \\ |\n| HRNet-w48 | 75.6 | 33.5 | 90.1 | \\ | 61.1 | 73.1 | 57.9 | \\ |\n| | | | | | | | | |\n| ViTPose-B | 77.7 | 32.2 | 93.2 | 75.5 | 89.9 | 73.7 | 57.5 | 86.98 |\n| ViTPose-L | 79.1 | 34.2 | 93.9 | 77.6 | 92.5 | 78.3 | 60.1 | 87.59 |\n\n> [11] Yu H, et al. AP-10K: A Benchmark for Animal Pose Estimation in the Wild, Neurips dataset track 2021.\n\n> [12] Jin, Sheng, et al. \"Whole-body human pose estimation in the wild.\" ECCV, 2020.\n\n> [13] Moon, Gyeongsik, et al. \"Interhand2. 6m: A dataset and baseline for 3d interacting hand pose estimation from a single rgb image.\" ECCV, 2020.", " *2. The claimed advantages of simplicity, scalability, flexibility, and knowledge transferability can also be achieved by previous deep learning models. 
The vision transformer itself is not structurally simpler than CNN; stacking ViT without introducing other modules doesn't mean this model has a simpler structure compared to CNNs. Model size of other models can also be easily controlled by stacking. The training paradigm flexibility is very common in CNNs for humane pose estimation. Instead of using different decoders, particular output channels from different datasets can be masked during training due to the flexibility of heatmaps.*\n\n**A2**: A: Thanks for your suggestion. We will try to address these concerns from several aspects. \n\n1. **The structure simplicity.** We think CNN is actually a good backbone and plays an important role in the development of deep learning for computer vision. We agree with you that the operation of convolution is simple and easy to complement in capturing local structures. To make the CNN models better at capturing global relationships, some fantastic module designs have been proposed, for example, SE attention [1], non-local attention [2], and ASPP [3]. These techniques have been proven beneficial in downstream vision tasks. Although they are not complex, it introduces extra structures like multi-branch structures. The vision transformer, instead, has a stronger representation ability even with an isotropic structure only because they could learn to acquire the inductive bias of locality and the ability to model long-range dependency from data. For example, ResNet is more adaptable to the classic decoder and experiences significant performance drops with the usage of simple bilinear upsample layers, while *ViTPose-B can obtain competitive performance even with the simple bilinear upsample layers*, thanks to the backbone’s strong representation ability, i.e., the vision transformer itself can already encode features of good linear separability for pose estimation. Thus, we think the usage of vision transformers can simplify the encoder and decoder’s design while obtaining competitive performance. Besides, the operations in vision transformers are composed of dense matrix multiplication operations, no matter in attention or FFN modules. Such operations are *hardware-friendly and computational simple* [4]. As demonstrated in Figure 1, it sets a new Parto Front for pose estimation tasks.\n\n > [1] Hu, Jie, Li Shen, and Gang Sun. \"Squeeze-and-excitation networks.\" CVPR 2018.\n\n > [2] Wang, Xiaolong, et al. \"Non-local neural networks.\" CVPR 2018.\n \n > [3] Chen, Liang-Chieh, et al. \"Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs.\" TPAMI 2017.\n\n > [4] Goto, Kazushige, et al. \"Anatomy of high-performance matrix multiplication.\" ACM Transactions on Mathematical Software (TOMS) 34.3 (2008)\n\n2. **The model scaling concerns.** Model scaling [5] is an important topic as it can enhance the model's representation ability and is a trend in future computer vision research for foundation models. However, as discussed in EfficientNet [6], the model’s performance diminishes by simply stacking the convolution layers, and it designs a compound scaling rule to scale the model sizes (about 400M parameters), which needs a heavy and careful model structure tuning process. Thus, it is not easy work to scale up the CNN models. 
Recently, transformer structures demonstrate a better scaling ability to over 3B parameters in both NLP and vision works, e.g., GPT-3 [7], Switch [8], and Swin-v2 [9] indicate a contiguous growth of representation ability in language or image classification tasks. To this end, we think it is necessary to evaluate ViTPose’s scaling ability and reveal whether the performance of large models will diminish in pose estimation tasks. Fortunately, we observe constant performance gains from ViTPose-B (88M) to ViTPose-G (1B). Such observation proves the scaling ability of vision transformers in pose estimation tasks, which has never been discussed before. \n\n > [5] Bommasani, Rishi, et al. \"On the opportunities and risks of foundation models.\" arXiv, 2021. \n\n > [6] Tan, Mingxing, and Quoc Le. \"Efficientnet: Rethinking model scaling for convolutional neural networks.\" ICML, 2019.\n\n > [7] Floridi, Luciano, and Massimo Chiriatti. \"GPT-3: Its nature, scope, limits, and consequences.\" Minds and Machines, 2020.\n\n > [8] Fedus, William, Barret Zoph, and Noam Shazeer. \"Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity.\" JMLR, 2022.\n\n > [9] Liu, Ze, et al. \"Swin transformer v2: Scaling up capacity and resolution.\" CVPR. 2022.\n\n", " We sincerely thank you for the careful and thoughtful comments. Below we address the key concerns and promise will incorporate all feedback in the revised version.\n\n*1. Although the experiments of this paper are solid, the novelty of this paper is limited as this is an application of ViT to human pose estimations.*\n\n**A1**: Thanks for your suggestion. As pointed out in paper Line *54* and Reviewer *nUSy* and *vNx3*, ViTPose aims to establish a strong and solid vision transformer baseline for human pose estimation tasks without elaborate structure design. It is based on a strong and solid motivation to *explore the strong representation ability* of plain vision transformers for pose estimation tasks. Most of the previous works treat vision transformers blocks as post-processers to model the relationships among different keypoints and refine the feature. Although they have shown promising results in pose estimation, they ignore the vision transformer’s, especially the plain vision transformer’s, strong representation power that the vision transformer itself can already encode features of good linear separability for pose estimation. ViTPose makes a step in this direction and supports the previous claim by reaching competitive performance with only one simple classifier, as shown in Table 1. Besides the good representation modeling ability, ViTPose also shows the community several valuable properties of vision transformers for pose estimation, including simplicity, scalability, flexibility, and transferability, which have not been explored previously. We hope these properties can provide the community with useful insights into adapting and utilizing vision transformers' strong representation ability for different tasks, including but not limited to pose estimation tasks. Having these baselines with simple vision transformers is beneficial to serve as starting points for future studies on pose estimation and other tasks.", " *7. How the multi-task training carried out in Table 6?*\n\n**A7**: Sorry for the misunderstanding. We will make it clearer in the revised version. The multi-task training represents joint training with different pose estimation datasets. 
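In pseudocode, one joint-training iteration can be sketched as below (illustrative only; the dataset names, loaders, and head modules are hypothetical):

```python
import torch

def joint_training_step(backbone, heads, iterators, optimizer, criterion):
    # One iteration of joint training over several pose datasets.
    optimizer.zero_grad()
    total_loss = 0.0
    for name, it in iterators.items():            # e.g., {"coco": ..., "aic": ..., "mpii": ...}
        images, target_heatmaps = next(it)         # sample a batch from this dataset
        features = backbone(images)                # shared plain-ViT encoder
        pred_heatmaps = heads[name](features)      # dataset-specific lightweight decoder
        total_loss = total_loss + criterion(pred_heatmaps, target_heatmaps)
    total_loss.backward()
    optimizer.step()
    return float(total_loss.detach())
```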
We denote each different pose estimation dataset as an individual pose estimation task since they have different posture distribution, keypoint annotations, and background scenes. During training, we randomly sample images from each dataset at each iteration and feed them into the network for feature extraction. Then, the extracted features are fed to dataset-specific heads for pose prediction and loss computation. Except for the different human pose estimation tasks discussed in the paper, we further extend the training paradigms to animal pose estimation (ap10K [1]), whole-body keypoint extraction (including face and foot) [2], and hand pose estimation tasks [3]. The results are shown as follows. The results of ResNet and HRNet variants are taken from the MMPose repository. Thanks to the strong representation ability of the vision transformer, ViTPose not only works well on human pose estimation across several tasks but also performs well on the animal, whole body, and hand keypoint estimation tasks.\n\n| | COCO (AP) | AIC (AP) | MPII (PCKh) | CrowdPose (AP) | OChuman (AP) | AP10K (AP) | WholeBody (AP) | InterHand (AUC) |\n| :----------: | :---------: | :--------: | :-----------: | :--------------: | :------------: | :----------: | :--------------: | :---------------: |\n| ResNet-50 | 71.8 | \\ | 88.2 | 63.7 | 54.6 | 68.1 | 52 | 85.1 |\n| ResNet-101 | 72.6 | 29.4 | 88.8 | 64.7 | 55.9 | 68.1 | 53.3 | \\ |\n| ResNet-152 | 73.5 | \\ | 88.9 | 65.6 | 57 | \\ | 54.8 | \\ |\n| | | | | | | | | |\n| HRNet-w32 | 74.6 | 32.3 | 90 | 67.5 | 59.1 | 72.2 | 55.3 | \\ |\n| HRNet-w48 | 75.6 | 33.5 | 90.1 | \\ | 61.1 | 73.1 | 57.9 | \\ |\n| | | | | | | | | |\n| ViTPose-B | 77.7 | 32.2 | 93.2 | 75.5 | 89.9 | 73.7 | 57.5 | 86.98 |\n| ViTPose-L | 79.1 | 34.2 | 93.9 | 77.6 | 92.5 | 78.3 | 60.1 | 87.59 |\n\nWhat’s more, ViTPose can also be extended to more vision tasks like object tracking, where the pose estimation and tracking pipeline can be jointly formulated using a single model for keypoint tracking tasks. We expect to see more research in this direction to build a foundation model for multiple different vision tasks.\n\n> [1] Yu H, et al. AP-10K: A Benchmark for Animal Pose Estimation in the Wild. Neurips dataset track 2021.\n\n> [2] Jin, Sheng, et al. \"Whole-body human pose estimation in the wild.\" ECCV, 2020.\n\n> [3] Moon, Gyeongsik, et al. \"Interhand2. 6m: A dataset and baseline for 3d interacting hand pose estimation from a single rgb image.\" ECCV, 2020.\n\n*8. The paper should explore the training process and time taken to train such large models.*\n\n**A8**: Thanks for pointing it out. The training of ViTPose-B takes about 4 hours with 8 A100 GPUs on the MS COCO dataset. Training ViTPose-L and ViTPose-H take 12h and 24h, respectively. The performance of these models can further be improved using several techniques like knowledge distillation [4] or features from multiple layers [5]. To accelerate the training process, some mechanisms like token pruning or sparse attention [6,7] can be further explored in the future. The adapter or prompt operations that have proven efficient in NLP works can also be explored further to reduce memory consumption [8,9]. As ViTPose just serves as a simple backbone and a starting point to demonstrate the good properties of plain vision transformers for pose estimation tasks, we believe there is so much that could be explored in future works.\n\n> [4] Wang, Wenhui, et al. 
\"Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers.\" Neruips, 2020.\n\n> [5] Xie, Enze, et al. \"SegFormer: Simple and efficient design for semantic segmentation with transformers.\" Neruips, 2021.\n\n> [6] Tang, Yehui, et al. \"Patch slimming for efficient vision transformers.\" CVPR. 2022.\n\n> [7] Zaheer, Manzil, et al. \"Big bird: Transformers for longer sequences.\" Neruips, 2020.\n\n> [8] Houlsby, Neil, et al. \"Parameter-efficient transfer learning for NLP.\" ICML. PMLR, 2019.\n\n> [9] Lester, Brian, et al. \"The Power of Scale for Parameter-Efficient Prompt Tuning.\" EMNLP, 2021.", " *5. In table 4: Full+window training memory (28,594) is less than full only training memory (36,141). Any reason?*\n\n**A5**: *‘Full+Window’* indicates that we use interleaved window-based and full attention in the model. Specifically, we insert full attention modules into the models every two window-based attention modules (one for per 1/4 depth position). We will briefly discuss the memory costs of window-based attention and full attention. Denote the resolution of the feature map as $H, W$, and the window size is $h, w$.\n\n- **For full attention modules**\n\n> Full attention module first projects the feature into $Q$, $K$, and $V$ with the same shape $(HW, C)$, where $C$ is the channel dimension. Then, it conducts the attention calculation as $Attn = (Q \\times K^T)$, $Output = Attn \\times V$. The matrix multiplication process of $Q$ and $K^T$ (with shape $(HW, C) \\times (C, HW)$ consumes at least $(HW)^2C$ memory. Then, we get the attention matrix with the shape $(HW, HW)$. Then, we multiply the attention matrix with $V$, i.e., $(HW, HW) \\times (HW, C)$. The process further consumes $(HW)^2C$ memory. Thus, the memory consumption of full attention is $2 (HW)^2C$. \n\n- Similarly, **for window-based attention**\n\n> Window-based attention modules first split the feature map into non-overlapping windows with size $(h, w)$. Then, we conduct the attention operation within each window. Thus, the memory consumption for each window is $2(hw)^2C$. The number of windows of the feature is $H/h \\times W/w$. Thus, the total memory consumption of window-based attention is $2(hw)^2C \\times H/h \\times W/w$, which equals $2hwHWC$. \n\nSince the size of the window ($h, w$) is much smaller than the size of the feature ($H, W$), window-based attention consumes smaller memory than full attention operations. As observed in Table 4, the joint usage of full attention and window-based attention consumes less training memory than all full attention.\n\n*6. (16,12) window size is compared to (8,8) in the rest of the experiments except full. Any justification on selecting 16,12 why not 16,16?*\n\n**A6**: Thanks for your pointing out. We select (16, 12) instead of (16, 16) mainly due to the consideration of reducing extra computations. Firstly, the feature resolution of experiments in Table 4 is *32x24* given the typical input size *256x192* in the human pose estimation tasks. The window size of (16, 12) allows us to partition the feature into 4 non-overlapping windows exactly. In contrast, for the window size of *(16, 16)*, we need to pad the feature to *32x32* for window partition and then conduct the window-based attentions. 
Such a process of padding and then attention causes extra computational costs and memory consumptions, i.e., (16, 16) window partition causes an out-of-memory issue with full precision training and needs 26,780 training memory with fp16, while (16, 12) window partitions only need 26,778 training memory with full precision training.", " *3. Model complexity should have included GFLOPS and trainable params on top of memory footprint.*\n\n**A3**: Thanks for your advice. Following your advice, we have updated the tables with memory footprint comparison by including GFLOPs and trainable parameters. Specifically, \n\n- Table 4\n\n| Full | Window | Shift | Pool | WindowSize | #Params (M) | GFLOPs | Memory (M) | $AP$ | $AP_{50}$ | $AR$ | $AR_{50}$ |\n| :----------: | :----------: | :----------: | :----------: | :----------: | :-------: | :------: | :---------------: | :-----: | :-----: | :-----: | :-----: |\n| $\\checkmark$ | | | | N/A | 86 | 76.59 | 36,141 | 77.4 | 91.0 | 82.4 | 94.9 |\n| | | | | | | | | | | | |\n| | $\\checkmark$ | | | (8, 8) | 86 | 66.31 | 21,161 | 66.4 | 87.7 | 72.9 | 91.9 |\n| | $\\checkmark$ | $\\checkmark$ | | (8, 8) | 86 | 66.31 | 21,161 | 76.4 | 90.9 | 81.6 | 94.5 |\n| | $\\checkmark$ | | $\\checkmark$ | (8, 8) | 86 | 66.39 | 22,893 | 76.4 | 90.6 | 81.6 | 94.6 |\n| | $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | (8, 8) | 86 | 66.39 | 22,893 | 76.8 | 90.8 | 81.9 | 94.8 |\n| $\\checkmark$ | $\\checkmark$ | | | (8, 8) | 86 | 69.94 | 28,594 | 76.9 | 90.8 | 82.1 | 94.7 |\n| | | | | | | | | | | | |\n| | $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | (16, 12) | 86 | 68.46 | 26,778 | 77.1 | 91.0 | 82.2 | 94.8 |\n\n> We can see that the “no window attention” version has the most FLOPs compared with using window partitions for attention calculation. The number of trainable parameters is the same for all configurations since the attention calculation operation does not involve any parameters but different manners of matrix multiplication. \n\n- Table 5 \n\n| FFN | MHSA | #Params (M) | GFLOPs | Memory (M) | $AP$ | $AP_{50}$ | $AR$ | $AR_{50}$ |\n| :----------: | :----------: | :-------: | :------: | :------: | :----: | :-----: | :----: | :-----: |\n| $\\checkmark$ | $\\checkmark$ | 86 | 17.1 | 14,090 | 75.8 | 90.7 | 81.1 | 94.6 |\n| $\\checkmark$ | | 57 | 10.9 | 11,052 | 75.1 | 90.5 | 80.3 | 94.4 |\n| | $\\checkmark$ | 23 | 6.2 | 10,941 | 72.8 | 89.8 | 78.3 | 93.8 |\n\n> We report the number of parameters and FLOPs of the trainable part and dismiss the frozen part. It can be observed that the FFN part contributes more parameters and FLOPs than the MHSA part. \n\n- Table 7\n\n| Token | Heatmap | Teacher | #Params (M) | GFLOPs | Memory (M) | $AP$ | $AP_{50}$ | $AR$ | $AR_{50}$ |\n| :----------: | :----------: | :---------: | :-------: | :------: | :------: | :-----: | :-----: | :-----: | :-----: |\n| \\ | \\ | \\ | 86 | 17.1 | 14,090 | 75.8 | 90.7 | 81.1 | 94.6 |\n| $\\checkmark$ | | ViTPose-L | 86 | 17.1 | 14,203 | 76.0 | 90.7 | 81.3 | 94.8 |\n| | $\\checkmark$ | ViTPose-L | 86 | 17.1 | 15,458 | 76.4 | 90.8 | 81.6 | 94.8 |\n| $\\checkmark$ | $\\checkmark$ | ViTPose-L | 86 | 17.1 | 15,565 | 76.6 | 90.9 | 81.8 | 94.9 |\n\n> Since there is only one extra token introduced for the token-based distillation, the increase in extra FLOPs and parameters is negligible. 
However, although the teacher model is frozen and contributes zero to the FLOPs of the trainable part, it causes about a 10\\% memory increase since we need to deploy the teacher model in GPU to provide online supervision given the input images.\n\n*4. What is the accuracy of COCO + AI challenger dataset (Table 4) without using cropped person*\n\n**A4**: Thanks for your suggestion. We conduct the experiment by directly using the images from the training set of the COCO and AI Challenger dataset without any cropping. We train the base models on the dataset for 1600 epochs following the same setting in Table 4 and then use the pre-trained weights to initialize the backbone models in ViTPose-B. The results are as follows. Using COCO + AIC without cropping also obtains competitive performance with using ImageNet-1k for pre-training, i.e., 75.8 AP, although the training data are much less than ImageNet-1k. It further validates the conclusion that pre-training on the data from downstream tasks has better data efficiency, and ViTPose is flexible in using the training data.\n\n| Dataset | Dataset Volume | $AP$ |\n| :-------: | :--------------:| :----:| \n| ImageNet-1k | 1M | 75.8 |\n| COCO + AIC (cropping) | 500K | 75.8 |\n| *COCO + AIC (no cropping)* | *300K* | *75.8* |", " We sincerely thank you for the careful and thoughtful comments. Below we address the key concerns and promise will incorporate all feedback in the revised version.\n\n*1. Novelty of the approach.*\n\n**A1**: Thanks for your suggestion. As pointed out in paper Line *54* and Reviewer *nUSy* and *vNx3*, ViTPose aims to establish a strong and solid vision transformer baseline for human pose estimation tasks without elaborate structure design. Previous works have shown promising results in pose estimation with vision transformers. However, most of them treat vision transformers blocks as post-processers to model the relationships among different keypoints and refine the feature. They ignore the vision transformer's, especially the plain vision transformer's, strong representation ability that the vision transformer itself can already encode features of good linear separability for pose estimation. ViTPose makes a step in this direction and supports the previous claim by reaching competitive performance with only one simple classifier, as shown in Table 1. Besides the good representation modeling ability, ViTPose also shows the community several valuable properties of vision transformers for pose estimation, including simplicity, scalability, flexibility, and transferability, which have not been explored previously. We hope these properties can provide the community with useful insights into adapting and utilizing vision transformers' strong representation ability for different tasks, including but not limited to pose estimation tasks. Having these baselines with simple vision transformers is beneficial to serve as starting points for future studies on pose estimation and other tasks.\n\n*2. What is the difference between knowledge tokens and visual tokens? Which one is more important and why?*\n\n**A2**: Sorry for the misunderstanding. The knowledge token is trained using larger models and is *frozen* during training of smaller models. The visual tokens are directly generated from the input images via patch embedding layers, which are *continuously updated* during the training process. Apart from the difference in the optimization and generation mechanisms for these two types of tokens, their purposes differ in two aspects. 
1) The knowledge token gathers useful information from the larger model and acts as a fixed “memory” variable in the small model. Intuitively, via the attention mechanism, the tokens in the small model could “read out” useful information from the knowledge token and learn to encode better feature representation. 2) The visual tokens are generated from the input images and are responsible for modeling the input's pose information. These two kinds of tokens can work together to improve the smaller models’ performance.", " We sincerely thank the reviewers for their thoughtful reviews. We are encouraged that the reviewers appreciate the solid and extensive experiments and ablation studies of ViTPose (Reviewer 6Ljs, ERVW, nUSy, vNx3), the SOTA performance of the proposed method (Reviewer 6Ljs, ERVW, nUSy, vNx3), the importance of the proposed baseline method for the pose estimation field (Reviewer 6Ljs, nUSy), the demonstration of various properties of vision transformers (Reviewer 6Ljs, nUSy, vNx3), and the well-written of the paper (Reviewer 6Ljs, ERVW, nUSy, vNx3).\n\nWe provide detailed responses to each reviewer, respectively, and promise we will incorporate all feedback in the revised version.", " The article presents an experimental evaluation of the vision transformers for human pose estimation tasks. The experimental analysis focuses on transformer’s simplicity in model structure, scalability in model size, flexibility in training, and transferability of knowledge (distillation) from bigger models to smaller ones. The experimental evaluations are carried out on single MS COCO dataset by exploring various strategies such as attention types, image resolutions, pre-trained weights and finetuning. The largest models gives state-of-the-art accuracy on the MS COCO dataset and the authors have mentioned that the code and models will be released for other researchers. The evaluation on other datasets (CrowdPose, OCHuman, MPII, AI Challanger) is provided in supplementary material. Strength:\n\nThe idea is good and is inspired by the lack of baseline evaluations of vision transformers for solving the pose estimation problem. The author has clarified that the paper does not claim algorithm superiority rather extensive evaluation on the MS COCO dataset to justify the benefits of vision transformers in solving pose estimation. \n\nThe paper has justified the usefulness of detailed experimental analysis by focusing on various aspect of vision transformer such as its structure, size, training process and knowledge distillation. \n\nExperimental evaluation using well-known benchmarked human pose dataset of MS COCO is carried out. The performance of the proposed approach is compared to the some recent transformer and/or attention driven approaches.\n\nAblation study involving the benefit of pre-training data, influence of input resolution, impact of attention types, influence of partially finetuning and multi-task learning.\n\nThe article is well-written and easy to follow.\n\nWeakness:\n\nThe proposed idea is an application of vision transformers to human pose estimation thus there is a question mark on the novelty. \nThe model’s computation complexity is presented as training memory. However, the other parameters such as GFLOPs and trainable parameters would have given a better reflection. \n\nIt is unclear how the proposed knowledge token is different form the visual tokens. 
\n\nAlthough COCO + AI challenger dataset volume is half of ImageNet-1K (Table 2) but cropped person is used for training which is much cleaner dataset in comparison to ImageNet-1K. This could have influenced the performance gap. Any justification?\n\nIn table 4: Full+window training memory (28,594) is less than full only training memory (36,141). Any reason?\n\nIn table 4: (16,12) window size is compared to (8,8) in the rest of the experiments except full. Any justification on selecting 16,12 why not 16,16?\n\nHow the multi-task training carried out in Table 6. Other than pose what other tasks that learned during multi-task learning. \n 1. Novelty of the approach\n\n2. What is the difference between knowledge tokens and visual tokens? Which one is more important and why?\n\n3. Model complexity should have included GFLOPS and trainable params on top of memory footprint.\n\n4. What is the accuracy of COCO + AI challenger dataset (Table 4) without using cropped person\n\n5. In table 4: Full+window training memory (28,594) is less than full only training memory (36,141). Any reason?\n\n6. How the multi-task training carried out in Table 6? The section 5 header says \"Limitation\" but the content does not discuss any limitations. However, it does mention about social impact of machine learning biases and personal privacy. It also mentions about the carbon footprint from data centers using this large-scale model training. \n\nThe paper should explore the training process and time taken to train such large models and how these models could be improved in near future. ", " This paper applied the vision transformer to the task of single human pose estimation. The model uses plain and non-hierarchical vision transformers to extract features from the input image. Then a very simple decoder is added to regress heatmaps for human pose estimation. The method with a large version model achieved 80.9 AP on the COCO test-dev set. The authors also proposed a knowledge distillation method to transfer the knowledge from larger ViT models to smaller models. Strengths \n1) The paper is well-written and easy to read.\n2) The model achieved state-of-the-art performance on the MSCOCO human pose estimation test-dev set.\n3) The paper showed that with the powerful encoding ability of vision transformers, a very simple decoder (a bilinear upsample and one layer convolution) can achieve a very good result.\n4) The authors also proposed a knowledge distillation method to transfer the knowledge from larger ViT models to smaller models, which improved the AP from 75.8 to 76.6 with little computation overhead.\n5) The authors did a lot of detailed ablation studies on the influences of partially fine-tuning, attention type, input resolution and pre-training data.\nWeaknesses\n1) Although the experiments of this paper are solid, the novelty of this paper is limited as this is an application of ViT to human pose estimations.\n2) The claimed advantages of structural simplicity, model size scalability, training paradigm flexibility, and knowledge transferability can also be achieved by previous deep learning models. The vision transformer itself is not structurally simpler than CNN, stacking ViT without introducing other modules doesn't mean this model has a simpler structure compared to CNNs. Model size of other models can also be easily controlled by stacking. The training paradigm flexibility is very common in CNNs for humane pose estimation. 
Instead of using different decoders, particular output channels from different datasets can be masked during training due to the flexibility of heatmaps.\n3) In lines 76-77, the authors claimed this model can be extended from a single pose dataset to multiple pose datasets. This can be confusing as in human pose estimation, multi-pose datasets normally refer to images with more than one pose to predict. It looks like the authors are describing extending the method to multi-dataset training.\n4) The paper particularly discussed \"multi-task training\". As far as I can tell, it's multi-dataset instead of multi-task training. The task is human pose estimation with different numbers of joints. As in the weaknesses, there are some details to be clarified by the authors. Yes. The authors described the limitation of not fully exploring more advanced technologies.", " This paper proposes a simple transformer-based architecture for human pose estimation in images. The model is composed of a visual backbone (ViT) along with a simple decoder to obtain the heat maps providing the joint locations. The authors conduct multiple ablations to study the importance of various components (input resolution, number of parameters, ability to perform multi-task training etc.). This results in a state-of-the-art model on the COCO Keypoint Detection benchmark while maintaining a fast inference speed thanks to the ViT architecture, which is modern hardware friendly. **Strengths**\n\n- The paper is clearly written and well executed\n- The method is simple to understand and performs well. Such strong baseline papers are important to the field as they can be the starting point for even better methods as it is easier to build on top of such approaches.\n- I enjoyed reading the ablation study, which is well conducted and properly sheds light on the important components of the approach\n\n**Weaknesses**\n\n- One could argue that there is not much novelty in the paper, as it is mainly a well-executed strong baseline. However, I still feel the paper is a good enough contribution in itself. \n- I am not sure I understand the motivation behind the learned token for transferability from large models to smaller models (see Questions below). **Question/clarifications**\n\n- What is the intuition behind the \"token-based distillation\"? Since it is trained to only work for a specific set of parameters, I am not sure I understand why adding it to a smaller model would suddenly distill knowledge from the big model to the small one. In Table 7, the gains are not very clear (I am not super familiar with the type of variance one may encounter on the COCO keypoint detection benchmark, can the authors confirm whether or not those results are significant?). It would be great if the authors could comment on that point. If this is inspired by past work, it should be cited.\n\n- What is layer-wise decay? How useful is that? Can you cite the paper for this?\n\n**Minor typos**\n\n- L151: double period at the end of the last sentence Limitations and societal impact are adequately discussed in the paper.", " This paper introduces a new baseline for pose estimation with a vision transformer architecture. It obtains state-of-the-art performance on the MS COCO dataset even without the usage of elaborate structural designs or complex frameworks. Further, it has simplicity, scalability, flexibility and transferability. Pros:\n\n(1)\tA new baseline for pose estimation with a vision transformer architecture is introduced. 
\n\n(2)\tSOTA results on MS COCO keypoint. \n\n(3)\tComprehensive experiments show its simplicity, scalability, flexibility and transferability. \n\nCons:\n\nThis manuscript introduces a new baseline architecture, which achieves superior performance on the MS COCO keypoint dataset. I think it is a strong paper. However, there are still some questions.\n\nIn Table 8, compared to HRFormer, ViTPose has more parameters but a faster speed (e.g., ViTPose-B vs. HRFormer-B), which should be explained in detail. The authors are also encouraged to employ a smaller model (e.g., ViTPose-S or stack smaller layers) with comparable parameters to previous methods to show the superiority of this baseline. Besides, the method in this manuscript should be a very simple baseline, and it is easy to conceive and implement; what is the reason for it surpassing the previous transformer-based methods? It should be discussed. \n As listed in Weaknesses, the following questions need to be addressed. \n\n(1)\tIntroduce a smaller model for comparison with previous methods.\n\n(2)\tMore explanations about this method and experimental results. \n There exist some biased training data (e.g., gender, race, age) and personal privacy problems, as listed in this manuscript. This might be a general problem in the pose estimation community. These might be alleviated by masking faces or using synthetic datasets. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ "fUFmPud53T_", "I4F4wdyIyWm", "qpWxKv4yI-g", "E9jPdR4MC0I", "ZMMvMh8ctVy", "0pmrjJgt2t", "F1K_2ig6GI", "nips_2022_6H2pBoPtm0s", "nips_2022_6H2pBoPtm0s", "5Edx4cWukJV", "TO2Drlc6R98", "DPqq0H43in_", "Rzcqq8Teoc0", "qeph9O4Yy3z", "O1hmiFQgI4I", "G1nIEvXS55U", "V3QsQXQklIA", "oNWIMSRZiPu", "nips_2022_6H2pBoPtm0s", "nips_2022_6H2pBoPtm0s", "nips_2022_6H2pBoPtm0s", "nips_2022_6H2pBoPtm0s", "nips_2022_6H2pBoPtm0s" ]
nips_2022_QNBzcgY0f4e
Easy incremental learning methods to consider for commercial fine-tuning applications
Fine-tuning deep learning models for commercial use cases is growing exponentially as more and more companies are adopting AI to enhance their core products and services, as well as automate their diurnal processes and activities. However, not many countries like the U.S. and those in Europe follow quality data collection methods for AI vision or NLP related automation applications. Thus, on many of these kinds of data, existing state-of-the-art pre-trained deep learning models fail to perform accurately, and when fine-tuning is done on these models, issues like catastrophic forgetting or being less specific in predictions as expected occur. Hence, in this paper, simplified incremental learning methods are introduced to be considered in existing fine-tuning infrastructures of pre-trained models (such as those available in huggingface.com) to help mitigate the aforementioned issues for commercial applications. The methods introduced are: 1) Fisher Shut-off, 2) Fractional Data Retention and 3) Border Control. Results show that when applying these methods on vanilla pre-trained models, the models are in fact able to add more to their knowledge without hurting much on what they had learned previously.
Reject
This paper motivates problems related to fine tuning of pre-trained deep learning models for commercial applications and proposes three solutions for incremental learning: Fisher Shut-off, Fractional Data Retention and Border Control). The reviewers thought the work was well-motivated and they were in agreement that this is a timely and important topic. However, they all found the novelty to be too low and the experiments unconvincing for NeurIPS. Therefore the recommendation is to reject the paper.
train
[ "DdIu6Jddvoj", "2Vjvv7sP_D", "Zr7qul0-gHG", "D8PyJntYJNB", "v68PMXKalQN", "_IkoFCytqcy" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " FDR may not be novel, but Border Control or anything similar has not been proposed yet.\n\nThe idea of the paper was to introduce easy incremental learning methods for commercial applications so that an infrastructure could be built out of it. And so visuals on the performance of these methods were more key for the paper than results on real-world benchmarks. But experimentation on real-world benchmarks has been left as future work.\n\nThis is because, in the future work, real-world pre-trained models like T5 and/or GPT-Neo will be experimented on with their trained datasets as well as new datasets (which have to be synthesized... so a lot of work to be done here) with the proposed methods having possible improvements to show effectiveness. So a preliminary paper outlining the main ideas was the focus of this paper.\n\nMaybe a change in the paper title that reflects the work more would be considered. ", " 1) The key differences are in the way they mitigate the catastrophic forgetting problem. While FDR and BC are based on Rehearsal Strategies, FS is based on Regularization. I could explicitly mention this in the paper to make it clear.\n\n2) The paper focuses on introducing easy methods of incremental learning for commercial applications with visual perspectives on how they perform. The latter is emphasized just to make the proposed incremental learning methods comprehensible to a novice practitioner. Experimental results on real-world benchmark datasets have been mentioned as future work, where possible improvements on the proposed methods would also be made.\n\n3) a) The idea was to introduce easy incremental learning methods for commercial applications and show how they perform. Commercial practitioners may or may not be too academically inclined, and so there was no intention of showing that they outperform the existing methods, which are much more academically inclined (e.g., Knowledge Distillation). \nb) Table 1 shows results using different hyperparameter values for the proposed methods.\nc) Since FDR and BC are both based on Rehearsal Strategies, with BC being more austere but powerful, mixing FDR with BC to do a FDR + BC did not make much sense to show.\n\n4) Good question. Something that I was quite worried would come up. And that's because the batches in the toy dataset, which represent the incremental data, were formed by applying clustering. Then Castro et al. (2018), which was about picking data points in the proximity of cluster centers, just led me to randomly select and store data from each previous batch.\n\n5) k-farthest data points are not out-of-distribution (or OOD). Each batch is OOD, which could also represent the new low-quality data points. ", " The toy dataset used in showing the performance of the proposed methods, though not based on benchmark datasets, is representative of a typical benchmark dataset in the 2D space, inclusive of some of the kinds of nonlinearities typical benchmark datasets do have.\n\nI could add this statement to my paper if this is not clear.", " Incremental training via fine-tuning of large-scale deep learning models often faces challenging issues like catastrophic forgetting. As a remedy, this paper proposes three methods, 1) Fisher Shut-off, 2) Fractional Data Retention and 3) Border Control, for commercial deep learning fine-tuning scenarios. The effectiveness of the proposed remedies is tested on a toy dataset. Strength:\n\nIncremental learning on commercial applications is an important topic. 
\n\nWeakness:\n\nThe originality of this paper is not sufficient. While certain ideas are definitely interesting (like the Fisher shut off), most of the proposed strategies already exist, so the value of information from this paper would be quite limited. \n\nThe experiment section must be further enriched. I don't quite get the logic of illustrating the effectiveness of the proposed methods on a toy dataset, while one of the selling points of this paper is \"commercial fine tuning applications\".\n\nI think the presentation of this paper can be much improved. Details should be added to the included strategies, e.g., section 3.3 N/A. N/A.", " This paper introduces three simplified methods for incremental learning. The three methods are Fisher Shut-off, Fractional Data Retention and Border Control, which are all adapted from previous works. Fisher Shut-off selects weights in models that take part in fine-tuning. Fractional Data Retention and Border Control retain a fraction of data from earlier tasks and datasets. All the methods are illustrated with a toy dataset, and the author claims that they can be used on real-world datasets. Strengths: \n\nThe proposed methods are simple and easy to implement.\n\nWeakness:\n\n1. Lack of originality: the methods proposed in this paper are mainly built on other works. The differences are not clearly shown in the paper.\n\n2. The evaluation is on a toy dataset and it is too simple to support the claims that the proposed methods can be used in real-world applications. \n\n3. The data selection method designed in Fractional Data Retention and Border Control is not well studied and evaluated in this paper. Deep learning data selection is studied in many topics, i.e., active learning, model repair, and test data selection. To evaluate the effectiveness of Fractional Data Retention and Border Control, the authors should show that their methods are better than random selection and other commonly used data selection methods. [[1](https://arxiv.org/pdf/1708.00489.pdf), [2](https://arxiv.org/pdf/1906.03671.pdf)]\n\n4. The presentation of this paper is poor. 1. All three methods are adapted from previous works; what are the key differences?\n\n2. Experiments are conducted on a toy dataset. Please provide experimental results on real-world datasets to support the claims in this work.\n\n3. Even for the toy dataset experiment, \n - Why are there no existing techniques used for comparison? \n - I am not sure if the experiments in Table 1, i.e., FDR 10% and FDR 20%, or BC (topk=5) and BC (topk=10), have a different percentage of retained data. If so, a direct comparison between them is not fair.\n - Experiments with incremental learning without FS are essential.\n\n4. FDR requires data clustering; which technique do you use in the experiment? Do different clustering techniques largely affect the result?\n\n5. In BC, can you provide some proof that selecting the top-k farthest data points is more effective than random selection? Since the paper is claimed for real-world datasets with low-quality data, it is likely that the top-k farthest data points are out of distribution, which provides limited help. The limitation of this paper is partly addressed in Conclusion.", " This paper aims to address the incremental learning of pre-trained models for practical commercial scenarios. No novel method is proposed, and this work seems like a course report. 
Pros:\n- None\n\nCons:\n- There is not even a novel method proposed to address the problem of focus.\n- Plenty of existing works in the NLP and even the CV domain have been ignored. See the references [1-6] below.\n- All the results are evaluated on very small synthetic datasets; none of the existing benchmarks has been investigated. Moreover, the existing state-of-the-art baselines have not been compared against.\n\n[1] Jang, Joel, et al. \"Towards Continual Knowledge Learning of Language Models.\" International Conference on Learning Representations. 2022. \n[2] Lin, Bill Yuchen, et al. \"On Continual Model Refinement in Out-of-Distribution Data Streams.\" Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022.\n[3] Ermis, Beyza, et al. \"Continual Learning With Transformers for Image Classification.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n[4] Jin, Xisen, et al. \"Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora.\" Challenges & Perspectives in Creating Large Language Models (2022): \n[5] LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5\n[6] Pelosin, Francesco, et al. \"Towards exemplar-free continual learning in vision transformers: an account of attention, functional and weight regularization.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. None See the Cons above." ]
[ -1, -1, -1, 2, 1, 2 ]
[ -1, -1, -1, 4, 5, 5 ]
[ "D8PyJntYJNB", "v68PMXKalQN", "_IkoFCytqcy", "nips_2022_QNBzcgY0f4e", "nips_2022_QNBzcgY0f4e", "nips_2022_QNBzcgY0f4e" ]
nips_2022_Ho6oWAslz5L
Saliency-Aware Neural Architecture Search
Recently a wide variety of NAS methods have been proposed and achieved considerable success in automatically identifying highly-performing architectures of neural networks for the sake of reducing the reliance on human experts. Existing NAS methods ignore the fact that different input data elements (e.g., image pixels) have different importance (or saliency) in determining the prediction outcome. They treat all data elements as being equally important and therefore lead to suboptimal performance. To address this problem, we propose an end-to-end framework which dynamically detects saliency of input data, reweights data using saliency maps, and searches architectures on saliency-reweighted data. Our framework is based on four-level optimization, which performs four learning stages in a unified way. At the first stage, a model is trained with its architecture tentatively fixed. At the second stage, saliency maps are generated using the trained model. At the third stage, the model is retrained on saliency-reweighted data. At the fourth stage, the model is evaluated on a validation set and the architecture is updated by minimizing the validation loss. Experiments on several datasets demonstrate the effectiveness of our framework.
Accept
This paper proposed a novel method that reweights data using saliency maps and searches architecture using saliency-reweighted data. There are four official reviewers for this submission. The reviewers consistently agree with the novelty, presentation, and experimental validation of this submission. The ratings are: borderline accept/accept/weak accept. The concerns raised by the reviewers are well addressed during the rebuttal. Thus the AC would like to recommend acceptance.
train
[ "YlZ3cz-4ms9", "KcMezNfu05s", "l0jFlQRR3Uy", "OOWKUrakmZ", "UFJQU87EDmu", "QP2qfml677G", "DMibAvHci1e", "-zZGKtjmewF", "HWdsUHTnFW9", "Gx7pkD8gon1", "df_5cUA2F_", "7XDBFzKilW" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for reading our response and raising the score. We highly appreciate the reviewer's constructive suggestions.", " Thank you for your feedback. The paper seems a good one to be further discussed officially in the community. I am raising the score.", " We would like to thank the reviewer for reading our response and supporting the acceptance of our paper. The constructive and valuable suggestions from the reviewer and other reviewers help to improve this paper a lot. We highly appreciate these suggestions, which are incorporated into the rebuttal revision. ", " I want to thank the authors for incorporating the suggestions from the other reviewers and me. They have done significant work adding new experiments and evaluations that strengthen the paper. I am satisfied with all of the changes, and I continue to think this paper should be accepted.", " We would like to thank the reviewer for the positive and constructive feedback. \n\n1. Thanks for mentioning the line of research on saliency-based network pruning. We looked at some papers in this field, which is indeed an interesting direction that our work can be bridged with, in future studies. Specifically, we plan to investigate the following ideas: 1) formulate saliency-based network pruning as a saliency-aware neural architecture search problem and automatically search for the optimal pruning decisions based on detected saliency; 2) extend the notion of saliency from input data to blocks in neural networks, develop multi-level optimization based framework to detect the saliency of blocks, and perform pruning on blocks based on detected saliency. \n\n We added such discussions in Lines 403-410 in the rebuttal revision. \n\n\n2. For the limitations of our method, we discussed them in the last section in the initial submission and further expanded the discussion in Lines 387-399 and 656-683 in the rebuttal revision.\n\n", " \n\nWe would like to thank the reviewer for the positive and constructive feedback. In the submitted rebuttal revision paper, we have addressed the weaknesses mentioned by the reviewer. The updates are marked with blue color. We summarize how these weaknesses are addressed and answer the reviewer's questions below. \n \n1. **Q**: Comment on what makes the proposed approach efficient.\n \n **A**: We improved computational efficiency from both the algorithm side and implementation side. \n \n On the algorithm side, we speed up computation by approximating the optimal solution at each stage using a one-step gradient descent update and reducing the update frequencies of some parameters. Specifically, we update the architecture $A$ every 5 mini-batches, update model weights $W_2$ and perturbations $\\delta$ every 3 mini-batches, and update $W_1$ on every mini-batch. We empirically found that reducing the update frequencies of $A$, $W_2$, and $\\delta$ greatly speeds up convergence without significantly sacrificing performance. \n Besides, when calculating hypergradients of $A$, we recursively approximate matrix-vector multiplications using finite-difference calculations, which reduces the computation cost from being quadratic in matrix dimensions down to linear. \n\n On the implementation side, we speed up computation by leveraging techniques and tricks including 1) automatic mixed precision, 2) using multiple (4, specifically) workers and pinned memory in PyTorch DataLoader, 3) using cudNN autotuner, 4) kernel fusion, etc. 
\n\n We added these discussions in Lines 694-707 in the rebuttal revision.\n\n$~$\n\n \n2. **Q**: In table 7, how does one judge which words should be deemed most salient?\n \n **A**: The prediction task corresponding to this table is sentiment classification. A word is more salient if it has a stronger correlation with a sentiment. For example, the word “entertaining” implies a positive sentiment, and therefore is considered to be salient. We added these discussions in Lines 689-693 in the rebuttal revision.\n\n$~$\n\n \n \n3. **Q**: Is it possible to extend the basic notion of saliency to be adapted to evolutionary or reinforcement learning based methods?\n \n **A**: It is possible to perform saliency-aware architecture search in reinforcement learning (RL) based NAS methods, as follows. First, use an RL controller to generate a set of candidate architectures. Second, given a candidate architecture, train its weight parameters on a training dataset. Third, given the trained model, perform adversarial attacks to detect the saliency maps of the training data, similar to stage II in our method. Fourth, use saliency maps to reweight training data and retrain the model on reweighted data. Fifth, evaluate the retrained model on a validation set and use validation accuracy as a reward of this architecture. Repeat steps 2-5 for every candidate architecture, calculate the mean reward on all candidate architectures, and update the RL controller by maximizing the mean reward using policy gradient. These procedures repeat until convergence. Similar procedures can be conducted to perform saliency-aware architecture search in evolutionary algorithm based NAS methods. In these approaches, gradient-based optimization cannot be used any more since the objectives are not differentiable. \n\n We added these discussions in Lines 657-671 in the rebuttal revision.\n \n \n$~$\n\n \n \n4. **Q**: To what degree the hyperparameter tuning is easy/difficult to achieve?\n \n **A**: Most hyperparameters in our method follow their default values used in baseline methods. The only hyperparameter needing to be tuned is the tradeoff parameter $\\gamma$. To tune $\\gamma$ on CIFAR-100, we randomly sample 2.5K data from the 25K training set and sample 2.5K data from the 25K validation set. Then we use the 5K sampled data as a hyperparameter tuning set. $\\gamma$ is tuned in {0.1, 0.5, 1, 2, 3}. For each configuration of $\\gamma$, we use the remaining 22.5K training data and 22.5K validation data to perform architecture search and use their combination to perform architecture evaluation (retraining a larger stacked network from scratch). Then we measure the performance of the stacked network on the 5K sampled data. $\\gamma$ value yielding the best performance is selected. For $\\gamma$ in CIFAR-10 and ImageNet experiments, we simply used the value tuned on CIFAR-100 and didn’t conduct further tuning.\n\n We added these discussions in Lines 708-718 in the rebuttal revision.\n\n$~$\n\n \n \n5. **Q**: Make the visualization of saliency maps in Figure 2 be more visible/clear.\n \n **A**: In rebuttal revision, we enlarged the size of saliency maps and added original images in Figure 3 and 5 to make the visualization of saliency maps more visible. \n\n$~$\n\n \n \n6. **Q**: Restate the central contributions in the conclusion section.\n \n **A**: In rebuttal revision, we have added more discussions of the central contributions in Lines 376-386. 
\n\n\n", " We'd like to thank the reviewer for the positive and constructive feedback. In the submitted rebuttal revision paper, we have addressed the weaknesses mentioned by the reviewer. We summarize how these weaknesses are addressed and answer the reviewer's questions below. \n \n1. **Q**: Experiment with other saliency methods. \n \n **A**: In the rebuttal revision (Table 6 and Lines 306-318), we experimented with two more saliency methods: integrated gradients and SmoothGrad. The table below shows the results. Our framework with IntegratedGrad and SmoothGrad still outperforms Darts2nd and Pdarts. This demonstrates that our framework is a general one that generalizes beyond a single saliency method. Second, IntegratedGrad and SmoothGrad perform worse than Adversarial Saliency. A possible reason is: IntegratedGrad and SmoothGrad restrict the definition of saliency to be gradient-based. In contrast, Adversarial treats saliency scores as optimization variables and automatically learns them by solving an optimization problem, which is more flexible. \n\n | Method | Error on CIFAR-100 | Error on CIFAR-10|\n | ------------- |-------------| -----|\n |Darts2nd| 20.58±0.44 | 2.76±0.09 |\n |IntegratedGrad-Darts2nd| 16.92±0.08 | 2.62±0.06 |\n |SmoothGrad-Darts2nd| 17.05±0.11 | 2.59±0.03 |\n |Adversarial-Darts2nd| **16.42**±0.09| **2.54**±0.05|\n |||||\n |Pdarts| 17.52±0.06 | 2.54±0.04 |\n |IntegratedGrad-Pdarts| 15.83±0.08 | 2.47±0.03 |\n |SmoothGrad-Pdarts| 15.81±0.05 | 2.48±0.04 |\n |Adversarial-Pdarts| **15.16**±0.09| **2.45**±0.03|\n |||||\n\n$~$\n\n \n2. **Q**: Would SANAS apply to nondifferentiable NAS methods? Why SANAS can not easily be applied to nondifferentiable NAS methods and what are the implications?\n \n **A**: By changing the gradient-based optimization algorithm to non-gradient-based algorithm, we can apply SANAS to nondifferentiable NAS methods, such as reinforcement learning (RL) based NAS methods, as follows. First, use an RL controller to generate candidate architectures. Second, given a candidate architecture, train it on a training dataset. Third, given the trained model, perform adversarial attacks to detect saliency maps of training data. Fourth, use saliency maps to reweight training data and retrain the model on reweighted data. Fifth, evaluate the retrained model on a validation set and use validation accuracy as a reward of this architecture. Repeat steps 2-5 for every candidate architecture, calculate the mean reward, and update the RL controller by maximizing the mean reward. These procedures repeat until convergence. Similar procedures can be conducted to perform saliency-aware NAS in evolutionary algorithm based NAS methods.\n \n The **reason that SANAS can not easily be applied to nondifferentiable NAS methods** is that SANAS uses a gradient-based optimization algorithm to solve the multi-level optimization problem. For nondifferentiable NAS methods, their nondifferentiable objective functions do not have gradients. \n \n The **implication** is: if we want to apply SANAS to nondifferentiable NAS methods, we have to change the gradient-based optimization algorithm to some other non-gradient-based algorithms.\n \n We added these discussions in Lines 388-393 and 657-671 in rebuttal revision. \n \n$~$\n\n \n3. **Q**: Perform saliency map tests for the adversarial saliency method. \n \n **A**: In the rebuttal revision (Lines 249-258), we evaluated saliency maps using model parameter cascading randomization tests. 
Figure 2(left) in the rebuttal revision shows that saliency maps change considerably as more layers are randomized. Figure 2(right) shows the Spearman rank correlation between original saliency maps and randomized saliency maps consistently decreases as more layers are randomized. These results demonstrate that saliency maps generated by the adversarial saliency method are sensitive to model parameters and pass the sanity check.\n\n$~$\n\n \n \n4. **Q**: Discuss other limitations of SANAS. When to use SANAS and when not? How to trade off between time cost and performance?\n \n **A**: In the rebuttal revision (Lines 388-399 and 656-683), we expanded the limitation section. As pointed out by the reviewer, one limitation of our method is the time cost. Another downside is that SANAS is mathematically more complicated than baselines, which adds some extra difficulty for usage. \n \n It is recommended to use SANAS in applications that strongly need high-performance architectures capable of generating sensible saliency maps but do not have strong efficiency requirements on architecture search time. For applications which have high restrictions on search cost but allow sacrificing some performance and ignoring saliency maps, other NAS methods might be better choices. \n\n \n$~$\n\n \n5. We added four more examples of text saliency in Table 10 (on page 17) in rebuttal revision. \n \n6. We addressed comments marked as minor in rebuttal revision.\n ", " We would like to thank the reviewer for the positive and constructive feedback. In the submitted rebuttal revision paper, we have addressed the weaknesses mentioned by the reviewer. The updates are marked with blue color. We summarize how these weaknesses are addressed and answer the reviewer's questions below. \n \n1. **Q**: The third contribution at the end of the introduction section should be removed. \n \n **A**: In the rebuttal revision, we have removed the third contribution from the introduction section. \n \n $~$\n\n2. **Q**: There should be some ablation studies to validate the effectiveness of various designs.\n \n **A**: In our initial submission, we reported ablation studies on 1) saliency reweighting mechanisms in Lines 285-301 of the main paper, 2) sensitivity on the hyperparameter $\\gamma$ in Lines 302-306 of the main paper, and 3) performing stage I and II by minimizing a single objective in Section 5 of the supplementary material. \n \n **In the rebuttal revision (Table 6 and 8, and Lines 306-334), we added three more ablation studies**, which 1) perform the four stages separately (denoted as Separate) instead of end-to-end; 2) perform stages 1-3 by minimizing the weighted sum of their objective functions in a multi-task learning (MTL) way (denoted as MTL); and 3) compare the adversarial attack based saliency detection method with other saliency detection methods including Integrated Gradient (IntegratedGrad) and SmoothGrad. These studies were performed on Darts2nd and Pdarts. \n **The table below shows results on Separate and MTL**. From this table, we make two observations. First, our end-to-end method works better than Separate which conducts the four stages separately. Conducting the four stages end-to-end can enable them to mutually influence each other to achieve the best overall performance. In contrast, when conducted separately, earlier stages cannot be influenced by later stages (e.g., stage I cannot be influenced by stage IV), which leads to worse performance. Second, our method performs better than MTL. 
The tasks in stages I-III have an inherent order: before detecting saliency maps using a model, we first need to train this model; before training the second model on saliency-reweighted data, we need to detect the saliency maps first. MTL performs these three tasks simultaneously by minimizing a single objective, which breaks their inherent order and therefore leads to worse performance. In contrast, our method preserves this order using multi-level optimization. \n \n | Method | Error on CIFAR-100 | Error on CIFAR-10|\n | ------------- |-------------| -----|\n |Separate-Darts2nd| 18.05±0.27 | 2.68±0.06 |\n |MTL-Darts2nd| 18.26±0.12 |2.70±0.05 |\n |Ours-Darts2nd| **16.42**±0.09| **2.54**±0.05|\n |||||\n |Separate-Pdarts| 16.49±0.07 |2.51±0.03 |\n |MTL-Pdarts| 16.83±0.10 |2.52±0.04 |\n |Ours-Pdarts| **15.16**±0.09| **2.45**±0.03|\n |||||\n\n **The table below shows results on IntegratedGrad and SmoothGrad**, where we make two observations. First, our framework with IntegratedGrad and SmoothGrad as saliency detection methods still outperforms Darts2nd and Pdarts. This demonstrates that our framework is a general one that generalizes beyond a single saliency detection method. Second, IntegratedGrad and SmoothGrad perform worse than Adversarial. A possible reason is: IntegratedGrad and SmoothGrad restrict the definition of saliency to be gradient-based. In contrast, Adversarial treats saliency scores as optimization variables and automatically learns them by solving an optimization problem, which is more flexible. \n\n | Method | Error on CIFAR-100 | Error on CIFAR-10|\n | ------------- |-------------| -----|\n |Darts2nd| 20.58±0.44 | 2.76±0.09 |\n |IntegratedGrad-Darts2nd| 16.92±0.08 | 2.62±0.06 |\n |SmoothGrad-Darts2nd| 17.05±0.11 | 2.59±0.03 |\n |Adversarial-Darts2nd| **16.42**±0.09| **2.54**±0.05|\n |||||\n |Pdarts| 17.52±0.06 | 2.54±0.04 |\n |IntegratedGrad-Pdarts| 15.83±0.08 | 2.47±0.03 |\n |SmoothGrad-Pdarts| 15.81±0.05 | 2.48±0.04 |\n |Adversarial-Pdarts| **15.16**±0.09| **2.45**±0.03|\n |||||", " This paper proposes an end-to-end framework which dynamically detects saliency of input data, reweights data using saliency maps, and searches architectures on saliency-reweighted data. The proposed framework is based on four-level optimization, which performs four learning stages in a unified way. Experiments on several datasets demonstrate the effectiveness of the proposed framework. This paper is interesting. It tries to address the limitation in existing NAS methods which treat all data elements as being equally important. Experiments show that the proposed NAS method achieves good performance.\n\nAt the end of the introduction part, the third contributions should be removed, which is a common part of a scientific paper rather than a special contribution.\n\nThere should be some ablation studies to validate the effectiveness of various designs (like the four stages) in the proposed method. This is important for better understanding the proposed method.\n Please see the above commons. Please see the above commons.", " Saliency-Aware Neural Architecture Search presents a four-step neural architecture search (NAS) procedure that reweights inputs based on their saliency value:\n1. The original architecture is trained on the original dataset.\n2. Using the trained model, an adversarial saliency method generates saliency maps for all inputs. The features of the inputs are reweighted based on their saliency value.\n3. The original architecture is retrained on the saliency weighted inputs.\n4. 
The model trained on the saliency weighted inputs is evaluated on the validation dataset. A new loss-minimizing architecture is selected using a traditional differentiable NAS method (e.g., DARTS).\n\nEach of the four steps is repeated until convergence. \n\nThe saliency-aware neural architecture search is evaluated on image classification tasks using CIFAR-10, CIFAR-100, and ImageNet and on text classification tasks using GLUE. The proposed method is combined with other differentiable NAS methods like DARTS-2nd, PC-DARTS, PR-DARTS, and P-DARTS. It is evaluated against non-NAS models, NAS-selected models, and other saliency-based training procedures to understand changes in performance, training time, and robustness. The saliency results are visualized and evaluated based on their alignment with human expectations. **Strengths**\n* *Evaluative methods* --- The paper does a comprehensive evaluation of the SANAS method. They evaluate text and image modalities, various NAS methods, NAS and non-NAS models, and related saliency-based training algorithms. In addition, they include robustness tests and an ablation study on saliency reweighting.\n* *Evaluative results* --- The results show that SANAS results in models with lower error rates while maintaining reasonable parameter and time costs. SANAS also results in better robustness to overfitting and prevents performance collapse. \n* *Clarity and Reproducibility* --- The paper and appendix are very well written and thorough. The method was easy to follow. All necessary details are included, including hyperparameter settings, model parameters, optimization algorithms, human study details, implementation details, and code packages for significant testing. It would be easy to reproduce.\n\n**Weaknesses**\n* *Limitations and societal implications* --- The limitations and societal implications section of this paper lacks detail. An expanded limitations section would help readers understand when to use SANAS over an alternative method and the specific benefits of SANAS.\n* *Saliency evaluation* --- The paper evaluates its saliency maps by visualizing them to human users who judge their alignment with human expectations. The paper states that visualizing the saliency maps shows that the \"method is effective in generating correct saliency maps.\" However, this evaluation assumes the model has learned the same features humans have (i.e., the saliency should highlight the object of interest). While ideally, the model would learn features that align with human expectations, research has shown that models often learn to rely on spurious features. Saliency methods should be faithful to the model's representations. If the model's representations do not align with a human's, then the saliency should not be sensible to a human interpreter. Other research has shown that sensible-looking saliency methods can mask the underlying model's behavior and inhibit proper interpretation of the results. The main contribution of the paper is not a new saliency method. Still, if the authors evaluate the saliency maps, they should use saliency map tests such as model parameter randomization tests or data randomization tests (see Sanity Checks for Saliency Maps by Adebayo et al.).\n* *Qualitative saliency results* --- The text examples show an example text saliency map. One example is not sufficient to claim the SANAS method is better than EC and GMPGC. Please include additional examples in the appendix. 
\n* *Saliency methods* --- The paper only uses an adversarial saliency method in its framework. The experiments evaluate against other saliency-based training procedures (EC, CDEP, and GMPGC) that use different saliency methods. However, these procedures differ from SANAS because they add regularization terms to the loss function instead of reweighting the dataset. Comparing to other saliency methods would strengthen the paper. If you found similarly positive results, it would validate that the benefit of SANAS generalizes beyond a single saliency method. However, even if these methods do not work as well, it would be interesting scientifically and shed light on potential differences between adversarial and non-adversarial methods.\n\n**Minor Comments**\n* Table 1: please explain what the asterisks represent.\n* Line 138: \"bottom to up\" --> \"bottom-up\"\n* Line 71 and 253: please expand on the unreliability of GradCAM or cite prior work.\n 1. What would happen if you switched the adversarial saliency method in step 3 for other types of saliency methods like integrated gradients, SmoothGrad, GradCAM, LIME, Sufficient Input Subsets, etc.? Would SANAS apply to nondifferentiable NAS methods if you used a non-gradient-based saliency method like LIME or Sufficient Input Subsets?\n2. How does the adversarial saliency method do on existing saliency map tests (see Sanity Checks for Saliency Maps by Adebayo et al.)? Can you please expand on other limitations of your method? Should I always use SANAS instead of a different differentiable NAS approach? \n\nFor instance, there seems to be a time cost of SANAS (e.g., Table 3 lists 1.1 GPU days compared to P-DARTS 0.3 GPU days). How should someone trade off between time cost and potential performance benefit?\n\nSection 5 mentions that SANAS can not easily be applied to nondifferentiable NAS methods. Why is this and what are the implications? Are there ways to use it with nondifferentiable NAS methods (e.g., if you used a non-gradient based saliency method like LIME)?", " This paper leverages the classic differentiable architecture search strategy for neural architecture search through a complex optimization process that involves training that reweights architectural parameters and weights according to a strategy where pixel values are biased by a saliency score that derives from the degree of effect on accuracy subject to perturbation of pixel values. Strengths:\n\n1. The work successfully demonstrates that the proposed saliency measure in the context of the optimization carried out in [40] gives rise to improved neural architectures.\n\n2. The optimization procedure, albeit complex, is formulated in a way that is relatively straightforward to optimize outside of the role that hyperparameter sensitivity may play in the process.\n\nWeaknesses:\n\n1. It's not clear to what degree the hyperparameter tuning is easy/difficulty to achieve although the ablation studies and extra analysis presented shows some confidence for general purpose use of this method.\n\n2. Visualization of saliency maps in figure 2 could be more visible/clear.\n\n3. The conclusion focuses too much on the limitations/weaknesses and should restate the central contributions. 1. For the four level optimization framework, this appears to be very complex but the cost values in the results tables suggests it is very efficient. Could the authors comment a bit more on what makes this approach so efficient given an apparently highly complex loss function?\n\n2. 
In Table 7, how does one judge which words should be deemed most salient? Anecdotally, what is shown seems to make sense, but further clarity on how to judge this would be interesting.\n\n3. The paper admits that for evolutionary or reinforcement learning based methods the approach can't be used. However, is it possible to extend the basic notion of saliency that is presented here to be adapted to such a method? If not, what is the central limitation? The authors have discussed both limitations and potential for negative societal impact, which is minimal.", " The paper unifies a four-step optimization solution for obtaining a neural architecture, considering the salience of data. The paper is well written and provides a well-articulated explanation of the solution and results. The paper provides a sufficient amount of experiments that support the usefulness of the proposed approach. I do not find a particular weakness for this work. There is a relevant line of research in pruning of neural networks using explanations that maps to similar salience maps. It would be worth looking into that area as there seems to be a nice bridge between the two lines of research. No specific limitation discussed " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "KcMezNfu05s", "UFJQU87EDmu", "OOWKUrakmZ", "DMibAvHci1e", "7XDBFzKilW", "df_5cUA2F_", "Gx7pkD8gon1", "HWdsUHTnFW9", "nips_2022_Ho6oWAslz5L", "nips_2022_Ho6oWAslz5L", "nips_2022_Ho6oWAslz5L", "nips_2022_Ho6oWAslz5L" ]